Runway has opened access to its Gen-2 neural network to everyone; it generates short videos from a text description.
You can try Gen-2 after a quick registration (sign-in with a Google account is supported). The free tier lets you generate a total of 105 seconds of video, either on the Runway website or in the mobile app for iOS.
On the site, you can choose to process your video using the Gen-1 model or create a video from scratch using Gen-2.
In the second case, you only need to enter a prompt in English. For example, here is the result for the prompt "rain in the tropical forest" — it turned out reasonably well.
A Gen-2 video lasts only a few seconds, and you can download it to your device right there.
With the previous model, Gen-1, you simply upload a video and select one of the styling options in the menu on the right:
Below, you can tweak the settings; then click Preview styles, and the neural network will generate static preview frames from your video. All that remains is to pick an option and generate the full video.
Gen-1 does not always recognize what exactly is shown in the video, so the stylized results are not always pleasing. Still, it is possible to get something original.
Here, for example, is a video of fireworks over the Neva River stylized with one of the presets.
And here is the original video.
All generated videos can be downloaded, but keep the 105-second limit in mind; your remaining allowance is displayed at the top of the site.