
linvanoak

Generative AI, 3D, CGI, VFX
159 Watchers · 309 Deviations
17.3K Pageviews
Artist // Varied
My Bio

I enjoy creating images with various tools and software.


Equipment used:


Sony NEX and ILCE cameras

Sony Alpha and E-mount lenses


Software used:


Adobe Creative Cloud Applications

Blender

ComfyUI for Stable Diffusion

Daz Studio

E-on Vue Infinite

Otoy Octane Render

Pixologic Zbrush

Sony Vegas Pro

Ultraforge

Video Copilot Elements 3D

For years Daz Productions Inc. has been my primary source for licensed 3D assets. On April 3rd, 2024, Daz AI Studio was introduced as an online service for generative AI. A moment to contemplate what we can do now and what could be next.

With Stable Diffusion and ComfyUI we currently have access to offline img2img workflows. With ControlNet we can guide the pose of characters. With IP-Adapter we can customize the look of characters to get consistent results. We can combine text prompts and image inputs to generate images.

To make use of my library of 10,000+ licensed 3D models, I set up a 3D scene and render out an image. That image can then be used in an img2img workflow with Stable Diffusion. As a next step in this evolution, 3D models could at some point be used as input directly. Which company will be the first to release an offline 3d2img generative AI?

You can find more information about why Daz 3D chose Stability AI as a partner to release Daz AI Studio on PR Newswire: Daz 3D Unveils Daz
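The img2img idea described above, starting generation from a noised version of the rendered image instead of from pure noise, can be sketched in NumPy. This is a conceptual illustration of the diffusion forward step, not ComfyUI's actual code; the toy noise schedule and function name are assumptions for the example.

```python
import numpy as np

def img2img_init(render, strength, alphas_cumprod, rng=None):
    """Noise a rendered image to the timestep implied by `strength`.

    strength near 0.0 keeps the render almost untouched; strength 1.0
    starts from (nearly) pure noise, discarding the render's content.
    Returns the noised latent and the starting timestep index.
    """
    rng = rng or np.random.default_rng(0)
    T = len(alphas_cumprod)
    # img2img skips the first (1 - strength) portion of the schedule
    t = min(int(strength * T), T - 1)
    a_bar = alphas_cumprod[t]
    noise = rng.standard_normal(render.shape)
    # forward diffusion: x_t = sqrt(a_bar) * x_0 + sqrt(1 - a_bar) * noise
    return np.sqrt(a_bar) * render + np.sqrt(1.0 - a_bar) * noise, t

# toy linear schedule (assumed values), 50 sampling steps
alphas_cumprod = np.cumprod(np.linspace(0.999, 0.95, 50))
render = np.zeros((4, 4))  # stand-in for the Daz Studio render
latent, t = img2img_init(render, strength=0.6, alphas_cumprod=alphas_cumprod)
```

A low strength preserves the composition of the 3D render; a high strength gives the AI more freedom to reinterpret it.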
ControlNet in ComfyUI can be used to reduce some of the randomness of generative AI: based on a source image, a pose can be used as a reference input.

In Daz Studio a couple pose was created. The image was rendered in Iray using White Mode, which is quick to render, and the pose and the expression of the face are detailed enough to be readable. In ComfyUI the rendered image was used as input in a Canny Edge ControlNet workflow. The Canny Edge node interprets the source image as line art.

Control Net Stacker settings:
strength: 0.5
start_percent: 0.0
end_percent: 0.75

With strength at 0.5 the AI will have some room to make small adjustments to the pose. The end_percent of 0.75 indicates that the AI will stop referring to the image after 75% of all steps have been calculated. Experiment with those values to find the specific settings for your image.

A licensed Daz 3D character was used as the source for a ReActor node in ComfyUI to replace the face generated by the AI.
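The start_percent/end_percent behavior described above can be sketched as a simple gating function: the guide image only contributes during the part of the sampling schedule that falls inside the window. This is an illustration of the concept, not the actual Control Net Stacker implementation.

```python
def controlnet_weight(step, total_steps, strength=0.5,
                      start_percent=0.0, end_percent=0.75):
    """Return the ControlNet conditioning weight for one sampling step.

    The guide image influences only those steps whose progress lies in
    [start_percent, end_percent); outside that window the weight is 0,
    so the final steps refine the image without the pose constraint.
    """
    progress = step / total_steps
    if start_percent <= progress < end_percent:
        return strength
    return 0.0

# with 20 steps and end_percent=0.75, steps 0-14 are guided, 15-19 are free
weights = [controlnet_weight(s, 20) for s in range(20)]
```

Lowering end_percent frees up more of the final steps, which tends to loosen the pose; raising strength toward 1.0 forces a closer match to the line art.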
Customizing 3D Scenes in Daz Studio

This journal entry features examples of frequent workflow steps: lighting, set decoration, conserving VRAM, colors & materials.

The following default scene was used: The Streets Of Venice; 45847; by Stonemason

LIGHTING
The Nvidia Iray Sun - Sky environment settings allow you to simulate daylight situations.

SET DECORATION
Props can be added to the scene depending on the specific vision of a project. In this example a table from the following product was used: Enchanted Ballroom; 36659; by Mely3D

CONSERVING VRAM
To render with a GPU, the scene must fit into its VRAM. Instances help to create copies of 3D objects without raising the memory requirements.

COLORS & MATERIALS
Shaders can be edited to change the color and material properties of any surface.

ADDING DETAILS
To fine-tune the look of the scene, custom 3D objects and textures can be used.

IMAGE VARIATION
By changing the lighting and props, alternative versions of the
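The VRAM point about instances can be illustrated with a small Python sketch: an instance stores only a transform plus a reference to one shared mesh, instead of duplicating the heavy geometry. The class names here are hypothetical, purely to show the idea; this is not Daz Studio's API.

```python
import numpy as np

class Mesh:
    """Heavy geometry data, stored once in memory."""
    def __init__(self, vertex_count):
        self.vertices = np.zeros((vertex_count, 3), dtype=np.float32)

class Instance:
    """Lightweight copy: a transform plus a reference to shared geometry."""
    def __init__(self, mesh, position):
        self.mesh = mesh          # reference, not a copy of the vertex data
        self.position = position  # only the transform differs per instance

gondola = Mesh(vertex_count=100_000)  # ~1.2 MB of vertex data, allocated once
instances = [Instance(gondola, (i * 2.0, 0.0, 0.0)) for i in range(50)]

# all 50 instances share the same vertex buffer
shared = all(inst.mesh.vertices is gondola.vertices for inst in instances)
```

Fifty true copies would multiply the geometry memory fifty-fold; fifty instances add only fifty small transforms, which is why instancing keeps large scenes inside GPU VRAM.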

Profile Comments 36

Thank you for the watch :)
... And thanks for the watch!
I like your way of compositing!
Thank you very much for the watch!! :)
Hi Lin :) Thanks for adding me to your watchlist
Thaaanks for the watch hihi =) 
Thank you for the watch, my friend. I look forward to seeing more of your work!