Principle

Soul Card - Style Transfer

Style transfer transforms images between two different domains: given a style image, it renders any input image in that designated style while preserving the content of the original image. Forart.ai uses deep learning for automatic texture modeling, moving style transfer from the earlier slow, single-model single-style approach to today's single-model arbitrary-style transfer. Artists and art lovers can use this project's style transfer module to select a favorite art style or a well-known artist's style and combine it with their own creative content to achieve high-level artistic creation.

The main features of Forart.ai's style transfer technology are:

  • Real-time speed

    The network takes over 30 hours to train on a GPU. Once trained, the model generates an image in about 10 s on a CPU and about 0.02 s on a GPU.

  • Single-model arbitrary style transfer

    In previous studies, generating images in a new style required adding new style images to the model and retraining. Our model instead computes the transformation parameters of the style feature map offline, so transfer to any style can be achieved without retraining.
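The whitepaper does not name the exact mechanism, but one widely used approach consistent with "computing transformation parameters of the style feature map offline" is adaptive instance normalization (AdaIN): the per-channel mean and standard deviation of the style features are computed once, then applied to re-normalize the content features at transfer time. A minimal NumPy sketch (function names are illustrative):

```python
import numpy as np

def style_params(style_feat):
    """Compute per-channel style statistics once, offline.
    style_feat: array of shape (C, H, W)."""
    return style_feat.mean(axis=(1, 2)), style_feat.std(axis=(1, 2))

def adain(content_feat, style_mean, style_std, eps=1e-5):
    """Adaptive instance normalization: strip the content features'
    own statistics, then rescale with the pre-computed style statistics."""
    c_mean = content_feat.mean(axis=(1, 2), keepdims=True)
    c_std = content_feat.std(axis=(1, 2), keepdims=True) + eps
    normalized = (content_feat - c_mean) / c_std
    return normalized * style_std[:, None, None] + style_mean[:, None, None]

# Toy example: 3-channel 8x8 feature maps standing in for encoder outputs.
rng = np.random.default_rng(0)
content = rng.normal(size=(3, 8, 8))
style = rng.normal(loc=2.0, scale=0.5, size=(3, 8, 8))
mu, sigma = style_params(style)       # offline, once per style
out = adain(content, mu, sigma)       # at transfer time, any content image
```

Because only `mu` and `sigma` change per style, a single trained network can serve arbitrarily many styles without retraining.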

  • Impressive image quality

    Visual quality is ultimately subjective, and you are encouraged to try it yourself, but our model outperforms the models in other papers in the color distribution and visual comfort of the generated images.

Magic Wand - AI Creation

When humans hear or read a story, we immediately draw a visual representation in our minds. The ability to visualize and understand the intricate relationship between the visual world and language is so natural that we rarely think about it. Inspired by how humans visualize scenes, text-to-NFT models learn the relationship between vision and language and are able to create NFTs that reflect the meaning of textual descriptions.

Forart.ai uses Generative Adversarial Networks (GANs) to train generative models for NFTs in a fully unsupervised manner. GANs have sparked broad interest and advanced research in image synthesis. They frame the image-synthesis task as a two-player game between two competing neural networks: a generator trained to produce realistic samples, and a discriminator trained to distinguish real images from generated ones. The generator's training objective is to fool the discriminator.
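The two-player game can be made concrete with a deliberately tiny example: a one-dimensional GAN in plain NumPy, where both players are affine maps and the generator must learn to imitate samples from N(3, 1). This is an illustrative sketch of the adversarial objective, not Forart.ai's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(42)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def sample_real(n):
    """Real data distribution the generator must imitate: N(3, 1)."""
    return rng.normal(3.0, 1.0, n)

# Generator G(z) = w*z + b and discriminator D(x) = sigmoid(a*x + c),
# each a 1-D affine map so the game's gradients are easy to follow.
w, b = 1.0, 0.0          # generator parameters
a, c = 0.0, 0.0          # discriminator parameters
lr, batch = 0.05, 64

for step in range(3000):
    z = rng.normal(size=batch)
    fake = w * z + b
    real = sample_real(batch)

    # Discriminator step: maximize log D(real) + log(1 - D(fake)),
    # i.e. learn to tell real samples from generated ones.
    d_real, d_fake = sigmoid(a * real + c), sigmoid(a * fake + c)
    a -= lr * (np.mean((d_real - 1) * real) + np.mean(d_fake * fake))
    c -= lr * (np.mean(d_real - 1) + np.mean(d_fake))

    # Generator step (non-saturating loss): maximize log D(fake),
    # i.e. try to fool the freshly updated discriminator.
    d_fake = sigmoid(a * fake + c)
    g_signal = (d_fake - 1) * a      # gradient of -log D w.r.t. fake
    w -= lr * np.mean(g_signal * z)
    b -= lr * np.mean(g_signal)

fake_mean = (w * rng.normal(size=1000) + b).mean()
```

After training, the generated samples' mean drifts toward 3.0: the generator has matched the real distribution without ever seeing a label, which is the sense in which GAN training is unsupervised.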

Inspired by how humans draw mental pictures, textual descriptions provide an intuitive interface for conditional image synthesis.
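The whitepaper does not specify its text encoder, but a common conditioning scheme in conditional GANs is to embed the description and concatenate it with the noise vector as the generator's input. A toy NumPy sketch (the vocabulary, embedding table, and function names are all hypothetical stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy vocabulary and embedding table standing in for a
# real pretrained text encoder.
VOCAB = {"blue": 0, "cat": 1, "pixel": 2, "art": 3}
EMBED = rng.normal(size=(len(VOCAB), 8))   # 8-dim word embeddings

def encode_text(description):
    """Average word embeddings into one conditioning vector."""
    ids = [VOCAB[w] for w in description.split() if w in VOCAB]
    return EMBED[ids].mean(axis=0)

def generator_input(description, noise_dim=16):
    """Conditional input: text embedding concatenated with noise, so the
    same description with different noise yields varied outputs."""
    z = rng.normal(size=noise_dim)
    return np.concatenate([encode_text(description), z])

x = generator_input("blue pixel cat")   # shape (8 + 16,) = (24,)
```

The discriminator receives the same text embedding alongside the image, so both players judge not only realism but also whether the output matches the description.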
