In Deforum, the video init path settings are formatted as follows:
|Parameter|Description|
|-|-|
|video_init_path|Path to the input video. This can also be a URL, as seen in the default value.|
|use_mask_video|Toggles the video mask.|
|extract_from_frame|First frame to extract from the specified video.|
|extract_to_frame|Last frame to extract from the specified video.|
|extract_nth_frame|How many frames to skip between the extract-from and extract-to frames. 1 is the default and extracts every frame.|
|video_mask_path|Path to the video mask. Can be a URL or a local path.|

The rest of the mask settings behave just like regular img2img in the A1111 webui.

#### Parseq

The Parseq dropdown is for parsing the JSON export from Parseq. I have a separate guide on how to use Parseq [here](https://rentry.org/AnimAnon-Parseq).

### Video Input

|Parameter|Description|
|-|-|
|video_init_path|Path to the video you want to diffuse. Can't use a URL like init_image.|
|overwrite_extracted_frames|Re-extracts the input video frames every run. Make sure this is off if you already have the extracted frames, so diffusion begins immediately.|
|use_mask_video|Toggles use of a video mask. You will probably need to generate your own mask, e.g. by running the video through batch img2img and extracting a mask for every frame with Detection Detailer, or by using the Depth Mask script.|
|extract_nth_frame|Skips frames in the input video when providing images to diffuse. For example, a value of 1 diffuses every frame; 2 skips every other frame.|
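How the extraction parameters above interact can be sketched as follows. This is a minimal illustration, not Deforum's actual code: the helper name `select_frames` is hypothetical, and it assumes the documented behavior of keeping every nth frame between the from/to bounds.

```python
def select_frames(total_frames, extract_from_frame=0, extract_to_frame=-1,
                  extract_nth_frame=1):
    """Return the indices of video frames that would be extracted.

    Hypothetical helper illustrating the settings above: frames from
    `extract_from_frame` up to `extract_to_frame` (inclusive; -1 means
    the last frame of the video), keeping every nth one.
    """
    if extract_to_frame < 0:
        extract_to_frame = total_frames - 1
    return list(range(extract_from_frame, extract_to_frame + 1, extract_nth_frame))

# With extract_nth_frame=1 every frame in range is kept; 2 skips every other frame.
print(select_frames(10))                       # all 10 frames
print(select_frames(10, extract_nth_frame=2))  # [0, 2, 4, 6, 8]
```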
Prompts for different frames are entered in the prompt box in JSON format. You can check the format on this site: red highlighting means an error, green means the format is correct: https://odu.github.io/slingjsonlint/ In the positive and negative prompt boxes, enter the general prompts that stay unchanged throughout the animation.

## 4. Init

Set the initial reference image and tick "Use init" to enable it. Copy the image link into "Init image". Note: remove any quotation marks surrounding the link, make sure the path contains no Chinese characters, and use forward slashes (/) as the separators in the path.

## 5. ControlNet

Tick "Enable" to start using ControlNet. By default, five ControlNet models can be enabled; choose an appropriate preprocessor, model, weight, and other parameters. (Note that you cannot preview images here; copy the image link into the path text box. There is an image mask path field, but it cannot be used.)

## 6. Output

FPS is the frame rate, i.e. how many images are produced per second; max frames divided by FPS gives the duration of the generated animation. You can also add background music, upscale frames, delete frames, and so on, but doing these operations inside Deforum is not recommended. When the video finishes generating, click "Click here after the generation to show the video" to preview it in the page; it is saved to the img2img folder by default.

## 7. Using Presets

Copy the path of a preset txt file and paste it into the preset file path field at the bottom of the image preview window, then click "Load All Settings" to load the preset. Once the preset has loaded, you can replace the model, the init image, the prompt contents, and other information. To save an adjusted preset, point it at an existing folder path and click "Save Settings"; the preset text file is created automatically, but the init image must be saved manually.
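The keyframed prompt format described above can also be sanity-checked locally, in addition to the linked lint site. A minimal sketch, assuming Deforum's convention of a JSON object keyed by frame numbers (the prompt text is illustrative only); it also applies the duration arithmetic from the Output section:

```python
import json

# Example keyframe prompts: keys are frame numbers, values are prompt strings.
prompts_text = '''
{
  "0": "a forest at dawn, mist",
  "60": "a forest at noon, bright sunlight",
  "120": "a forest at dusk, warm colors"
}
'''

prompts = json.loads(prompts_text)  # raises an error on invalid ("red") JSON
assert all(k.isdigit() for k in prompts), "keys must be frame numbers"

# Duration check from the Output section: max frames / FPS = length in seconds.
max_frames, fps = 120, 15
print(f"animation length: {max_frames / fps:.1f} s")  # 8.0 s
```

A trailing comma after the last keyframe is the most common cause of the "red" lint error, since JSON (unlike Python) forbids it.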