Stable Diffusion Prompts Guide — Free & Open Source AI Art
Complete guide to writing effective Stable Diffusion prompts, including negative prompts, LoRA, and advanced techniques.
">How Stable Diffusion Prompts Work
Stable Diffusion prompts use a weighted system with parentheses that gives you precise control over how much influence each element has on the final image. The syntax "(beautiful:1.3)" raises the weight of "beautiful" to 1.3x the default, meaning the model pays roughly 30% more attention to that concept. You can combine multiple weighted terms, such as "(cinematic lighting:1.4), (shallow depth of field:1.2), masterpiece, best quality", to fine-tune exactly how much each element matters in the composition. The default weight is 1.0; values above 1.0 increase emphasis and values below 1.0 decrease it. The practical range is 0.5 to 1.5, since going beyond 1.5 often causes artifacts and oversaturation. You can also nest parentheses for quick adjustments: each pair multiplies the weight by 1.1, so ((keyword)) is equivalent to (keyword:1.21) and (((keyword))) to roughly (keyword:1.33). Word order matters in Stable Diffusion too: terms at the beginning of your prompt receive more attention than terms at the end, so always put your most important concepts first.
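The nesting-to-weight rule above is just repeated multiplication by 1.1. A few lines of Python make it concrete (the helper names are ours, not part of any Stable Diffusion tool):

```python
def nested_weight(depth: int) -> float:
    """Each pair of parentheses multiplies the term's weight by 1.1."""
    return round(1.1 ** depth, 2)

def weighted(term: str, weight: float = 1.0) -> str:
    """Render a term in the explicit (term:weight) syntax; 1.0 needs no markup."""
    return term if weight == 1.0 else f"({term}:{weight})"

print(nested_weight(2))                      # ((keyword))   -> 1.21
print(nested_weight(3))                      # (((keyword))) -> 1.33
print(weighted("cinematic lighting", 1.4))   # (cinematic lighting:1.4)
```

Explicit numeric weights are usually preferable to deep nesting, since "(keyword:1.2)" is easier to read and adjust than counting parentheses.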
">Mastering Negative Prompts
Negative prompts are equally important in Stable Diffusion, arguably more important than in any other tool. They tell the model what to actively avoid, and they dramatically clean up results by eliminating common AI artifacts. Always start with this universal negative prompt: "worst quality, low quality, blurry, deformed, mutated, extra limbs, bad anatomy, bad hands, missing fingers, extra fingers, watermark, text, signature, ugly, disfigured, jpeg artifacts, out of frame, cropped." This single line eliminates the large majority of common Stable Diffusion problems. For portraits specifically, add: "cross-eyed, asymmetric eyes, bad facial proportions, unnatural skin, plastic skin, overexposed face, deformed iris, bad teeth, fused fingers, too many fingers, long nails." For photorealistic work, add style exclusions to prevent the model from drifting into illustration territory: "painting, drawing, illustration, cartoon, anime, 3D render, CGI, sketch." Weight your most critical negatives: "(bad hands:1.4), (deformed:1.3), (blurry:1.2)" ensures these elements are strongly suppressed.
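Because the universal base, the use-case extras, and the weighted critical terms are just comma-joined fragments, it is easy to assemble negatives programmatically. Here is a small sketch in our own helper names (not part of any SD tool):

```python
# The universal negative prompt from the guide, kept as one reusable constant.
UNIVERSAL_NEGATIVE = (
    "worst quality, low quality, blurry, deformed, mutated, extra limbs, "
    "bad anatomy, bad hands, missing fingers, extra fingers, watermark, "
    "text, signature, ugly, disfigured, jpeg artifacts, out of frame, cropped"
)

def build_negative(*extras: str, weighted=None) -> str:
    """Combine the universal negative with use-case extras and
    weighted critical terms rendered as (term:weight)."""
    parts = [UNIVERSAL_NEGATIVE, *extras]
    if weighted:
        parts += [f"({term}:{w})" for term, w in weighted.items()]
    return ", ".join(parts)

# Photorealistic work: add style exclusions plus weighted critical negatives.
neg = build_negative(
    "painting, drawing, illustration, cartoon, anime, 3D render, CGI, sketch",
    weighted={"bad hands": 1.4, "deformed": 1.3, "blurry": 1.2},
)
```

The resulting string is pasted straight into the negative prompt field (or passed as `negative_prompt` when scripting generation).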
Choosing the Right Checkpoint Model
The checkpoint model matters more than the prompt in many cases. A checkpoint is the base model file (typically 2-7 GB) that determines the fundamental visual style of all generated images. For photorealism, use RealVisXL or JuggernautXL — these checkpoints have been fine-tuned on high-quality photographs and produce images with realistic skin texture, accurate light physics, and natural color science. For anime and illustration, use Anything V5, CounterfeitXL, or AnimagineXL 3.1 — these produce clean, vibrant anime art with proper cel shading and expressive character design. For artistic and creative styles, DreamShaper excels at blending photorealism with fantasy elements. For general-purpose SDXL work, the base SDXL model with a good refiner produces excellent results. Download checkpoints from CivitAI (civitai.com) or Hugging Face — both are free. The same prompt on two different checkpoints can produce wildly different results, so experimenting with checkpoints is just as important as refining your prompt text.
LoRAs: The Secret Weapon
LoRA (Low-Rank Adaptation) models are small add-on files (10-200 MB) that modify how a checkpoint generates images without replacing the entire model. They are the secret weapon of professional Stable Diffusion users. Want to generate in a specific art style? Download that style's LoRA. Want consistent characters across dozens of images? Train a LoRA on 10-20 reference images. Want a specific clothing item, pose style, or visual effect? There is probably a LoRA for it on CivitAI. To use a LoRA in Automatic1111, place the .safetensors file in your models/Lora folder and add the activation tag <lora:filename:weight> to your prompt, where filename is the file's name without the extension and weight controls how strongly the LoRA influences the output (0.6 to 1.0 is a typical range). Many LoRAs also need a trigger word, which is listed on their download page.
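Automatic1111 activates a LoRA through a <lora:filename:weight> tag embedded in the prompt text. A tiny helper (ours, with a hypothetical LoRA name for illustration) shows the shape of the tag:

```python
def lora_tag(name: str, weight: float = 0.8) -> str:
    """Build an Automatic1111 LoRA activation tag.
    'name' is the .safetensors filename without the extension."""
    return f"<lora:{name}:{weight}>"

# "epic_armor_style" is a made-up example; substitute a LoRA you downloaded.
prompt = (
    "masterpiece, best quality, portrait of a knight, "
    + lora_tag("epic_armor_style", 0.7)
)
```

Lowering the weight (e.g. 0.5) blends the LoRA's style more subtly with the base checkpoint; raising it toward 1.0 makes the LoRA dominate.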
">Advanced Techniques for Professional Results
Beyond basic txt2img, Stable Diffusion offers advanced techniques that unlock professional-grade output. img2img lets you use an existing image as a starting point and guide the generation with a text prompt, which is perfect for refining AI-generated images, transforming real photos, and iterating toward a specific vision. Set denoising strength between 0.3 (subtle changes, close to original) and 0.8 (dramatic transformation). ControlNet gives you precise control over poses, edges, depth, and composition by conditioning the generation on a reference image. OpenPose ControlNet extracts body pose skeletons from reference photos. Canny Edge preserves the structural outlines of a reference. Depth maps maintain spatial relationships for architecture and landscapes. Adetailer (After Detailer) automatically detects and regenerates faces in your output, fixing the most common Stable Diffusion artifact: distorted or inconsistent faces. Ultimate SD Upscale in ComfyUI tiles and upscales generated images to 4K+ resolution without running out of VRAM. Regional Prompter lets you assign different prompts to different regions of the image (left side "cityscape at night," right side "ocean at sunset") for creative composite effects.
">Stable Diffusion Prompt Templates
Here are proven prompt templates for common use cases. Photorealistic portrait: "(photorealistic:1.3), portrait of a [description], [lighting], [camera] [lens], shallow depth of field, raw photo, natural skin texture. Negative: painting, cartoon, anime, deformed, ugly, blurry." Anime character: "masterpiece, best quality, 1girl/1boy, [description], [pose], [background], anime style, cel shading, detailed. Negative: worst quality, low quality, bad anatomy, extra limbs, blurry." Product shot: "(commercial product photography:1.3), [product] on [surface], [lighting], clean composition, sharp focus, 8K detail. Negative: blurry, watermark, text, people, busy background." Fantasy landscape: "(concept art:1.2), epic [landscape description], volumetric lighting, dramatic sky, detailed environment, matte painting style. Negative: low quality, blurry, text, watermark." All prompts on PromptSpace work with Stable Diffusion — just remove Midjourney-specific parameters like --ar and --v, add appropriate weights, and include our recommended negative prompts. Browse our library at promptspace.in for thousands of Stable Diffusion-ready prompts.
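The bracketed slots in these templates lend themselves to simple programmatic filling. A minimal sketch (our own helper; the example slot values are hypothetical):

```python
# The photorealistic-portrait template from the guide, with [slot] placeholders.
TEMPLATES = {
    "portrait": (
        "(photorealistic:1.3), portrait of a [description], [lighting], "
        "[camera] [lens], shallow depth of field, raw photo, natural skin texture"
    ),
}

def fill(template_key: str, **slots: str) -> str:
    """Replace each [slot] placeholder in the template with the given value."""
    text = TEMPLATES[template_key]
    for key, value in slots.items():
        text = text.replace(f"[{key}]", value)
    return text

p = fill(
    "portrait",
    description="weathered sea captain",   # example values, not from the guide
    lighting="golden hour rim lighting",
    camera="Canon EOS R5",
    lens="85mm f/1.2",
)
```

The same pattern extends to the anime, product-shot, and fantasy-landscape templates: one dictionary entry per template, one `fill` call per generation.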