
How to Get the Best Results from MidJourney V8 Alpha

MidJourney V8 requires a different workflow than V7. Learn how to use stylize settings, personalization codes, and style creator to get great outputs.

MindStudio Team

Why V8 Needs a Different Approach Than V7

MidJourney V8 Alpha is not a refined version of V7 with better rendering. It’s a model that thinks about your prompts differently, and the prompting habits you built with V7 will produce mediocre results until you adjust.

V8 launched in alpha through the MidJourney web interface in early 2025. The headline improvements are real: noticeably sharper photorealism, much better text rendering inside images, and more accurate handling of complex, detailed prompts. But those improvements come with a tradeoff. V8 is more literal than V7 and applies far less artistic polish by default.

In V7, a prompt like “woman in a forest” would generate something compositionally interesting with warm, natural-looking light. In V8, the same prompt often produces something technically accurate but visually flat — because V8 waits for you to specify the details. Image generation with MidJourney V8 rewards specificity in ways earlier versions didn’t, making prompt engineering a more deliberate part of the workflow.

This guide covers the practical changes: how to structure prompts for V8, which stylize values work best for different output types, how personalization codes work, and how to use the Style Creator to build a consistent visual language across a project.

What’s Actually Different About MidJourney V8

Understanding the specific changes helps you adapt faster, rather than guessing your way through every prompt.

Prompt interpretation became more literal

V7 used creative interpretation to fill in whatever you didn’t specify. V8 leans toward delivering exactly what you asked for — nothing more. This is better when you know precisely what you want. It’s a problem when your prompts relied on the model’s judgment to supply atmosphere, drama, or mood automatically.

The default aesthetic shifted toward realism

V8’s baseline output looks more like a photograph than a rendered illustration. V7 had a natural tendency toward artistic polish — something painterly often appeared without being asked for. V8 starts closer to neutral. If you want cinematic, painterly, or stylized results, you need to specify them.

Text rendering improved significantly

Short words and signs inside images are now meaningfully more accurate. This was a consistent weak point in earlier MidJourney versions. In V8, one to three words — product names on packaging, signs in scenes, short labels — are now reliable enough to use intentionally. Longer text strings still degrade, but the threshold improved substantially.

Style and personalization tools are more central

The personalization feature and the Style Creator existed in earlier versions but are much more integrated in V8. They’re worth treating as core workflow tools — not extras you try when you’re bored — because they’re now reliable enough to build processes around.

Draft mode is available

V8 includes a draft mode that generates images significantly faster at lower quality. This matters for iteration. Test directions in draft mode and commit full-quality renders only once a prompt is producing what you want.

How to Write Prompts That Work in V8

The shift from V7 to V8 prompting is largely about moving from keyword clusters to descriptive language. V7 worked reasonably well with comma-separated keyword lists. V8 handles natural-language descriptions better and rewards specificity in ways V7 didn't.

Specify lighting explicitly

This is the highest-impact change you can make. V8 will not add dramatic or interesting lighting automatically. You need to name it.

Instead of: dramatic portrait, studio lighting

Try: Portrait lit from a single softbox positioned 45 degrees to the left, creating a strong shadow across the right side of the face, against a dark gray background

Useful lighting descriptors that work well in V8:

  • “soft overcast natural light”
  • “warm backlight creating rim lighting on the hair”
  • “harsh midday sun from above”
  • “neon reflections on wet pavement at night”
  • “golden hour directional light from the right, casting long shadows”
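Descriptors like these are worth keeping in a small reusable library. Below is a minimal Python sketch of that idea; the helper and preset names are this sketch's own, and nothing here calls MidJourney — the function only assembles the text you paste into the web interface:

```python
# Hypothetical prompt-composition helper. Preset keys and wording are
# examples; the function just builds a string for the web interface.

LIGHTING_PRESETS = {
    "overcast": "soft overcast natural light",
    "rim": "warm backlight creating rim lighting on the hair",
    "harsh": "harsh midday sun from above",
    "neon": "neon reflections on wet pavement at night",
    "golden": "golden hour directional light from the right, casting long shadows",
}

def light_prompt(subject: str, lighting_key: str) -> str:
    """Join a subject description with a named lighting preset."""
    return f"{subject}, {LIGHTING_PRESETS[lighting_key]}"

print(light_prompt("Portrait of a woman in a forest", "golden"))
```

Keeping lighting in a lookup like this makes it easy to test the same subject under every preset and see which direction V8 handles best.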

Add camera and lens details for photorealistic work

V8 responds well to photography terminology. These descriptors help calibrate perspective, depth of field, and distortion — and they signal to the model the kind of image you’re after.

  • shot on an 85mm lens, f/1.8, shallow depth of field
  • wide-angle lens, slight edge distortion
  • 35mm film grain, slightly desaturated
  • medium format look, high dynamic range

Use negative prompts more deliberately

The --no parameter follows instructions more reliably in V8 than earlier versions. If backgrounds are consistently too busy, add --no cluttered background, text, watermarks. If portraits keep generating accessories you didn’t want, list them: --no earrings, necklace, hat. Build a small personal library of --no strings for your most common output types.
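That personal library can be as simple as a lookup table. In the sketch below, only the `--no` syntax itself is MidJourney's; the output-type keys and exclusion lists are illustrative examples:

```python
# Illustrative lookup of --no strings per output type.

NO_STRINGS = {
    "product": "--no cluttered background, text, watermarks",
    "portrait": "--no earrings, necklace, hat",
}

def with_exclusions(prompt: str, output_type: str) -> str:
    """Append the stored --no string for this output type, if one exists."""
    suffix = NO_STRINGS.get(output_type)
    return f"{prompt} {suffix}" if suffix else prompt

print(with_exclusions("studio shot of a ceramic mug", "product"))
```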

Set aspect ratio from the start

V8’s composition adapts to the aspect ratio you specify. Changing aspect ratio mid-iteration often requires starting over because the compositional choices V8 made for a 1:1 square don’t translate cleanly to 16:9 or 4:5. Decide early and lock it in before iterating.

Write in connected descriptions, not keyword lists

Instead of: mountain, fog, sunrise, epic, cinematic, photography

Try: A snow-capped mountain emerging from fog at sunrise, dramatic warm light on the peaks, wide-angle composition, national geographic photography style, photorealistic

The second version gives V8 a coherent scene to work toward. The first gives it a list of attributes to balance, which produces something average across all of them.

Stylize Settings: Finding the Right Range for Your Work

The --stylize parameter (also --s) controls how much creative latitude V8 applies to your prompt. The range runs from 0 to 1000, and the MidJourney parameter documentation covers the full technical reference if you want it.

| Stylize Range | Effect |
| --- | --- |
| 0–50 | Very literal. Minimal artistic interpretation. Good for product shots or technical references. |
| 100 (default) | Balanced between prompt accuracy and aesthetic quality. |
| 200–400 | Model adds compositional and aesthetic choices. Good for editorial and marketing work. |
| 500–800 | High creative freedom. Results may drift from your prompt. |
| 1000 | Maximum artistic interpretation. Treats your prompt as loose inspiration. |

For photorealistic work in V8: Use 50–150. V8’s baseline realism is strong enough that it doesn’t need high stylize values to look good. High stylize in photorealistic prompts often adds an artificial quality rather than improving the image.

For illustrative or artistic work: Try 300–600. This range adds visual interest while keeping outputs recognizable as what you described.

The --style raw combination: Adding --style raw disables most of V8’s default aesthetic processing, making outputs even more literal. Pair it with a low stylize value — --s 50 --style raw — for maximum control. This works well for product shots, technical illustrations, or any situation where accuracy outweighs artistic appeal.

The most reliable approach is to test the same prompt at three points — try 50, 200, and 500 — and compare. The variance between those three will tell you what range makes sense for your specific project. Don’t leave stylize on default and wonder why results feel inconsistent.
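The three-point test is easy to script. This sketch only formats the three prompt variants to paste in; the test values come from the paragraph above:

```python
# Generate the same prompt at three stylize test points for side-by-side
# comparison. Nothing here calls MidJourney.

def stylize_sweep(prompt: str, points=(50, 200, 500)) -> list:
    """Return one prompt string per --s test value."""
    return [f"{prompt} --s {s}" for s in points]

for variant in stylize_sweep("A snow-capped mountain emerging from fog at sunrise"):
    print(variant)
```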

Personalization Codes: What They Are and How to Use Them

Personalization is one of V8’s most practically useful features and one of the most misused.

How it works

When you rate images in MidJourney using the thumbs-up and thumbs-down options in the web interface, the platform builds a profile of your aesthetic preferences. It identifies patterns in what you respond to positively and what you reject. The --personalize parameter (or --p) applies that profile to your generations.

The practical effect: the model skews toward outputs that match your taste rather than its own defaults. If you consistently rate moody, high-contrast photography well and rate flat, overexposed images poorly, --p will push outputs toward that aesthetic automatically.

Building a useful personalization profile

The quality of your ratings matters more than the quantity. Rating images quickly without genuine engagement produces a noisy, inconsistent profile. Rate images you actually respond to — not just the ones that look technically polished, but the ones you’d genuinely want to produce.

You need to rate a meaningful number of images before personalization has enough signal to be useful. If --p isn’t producing noticeably different results yet, your profile needs more data before leaning on it.

Using personalization codes

Each user’s personalization profile generates a unique code. You can:

  • Apply your own code with --p [your-code]
  • Share your code with teammates so everyone’s outputs match the same aesthetic without each person needing their own rating history
  • Use codes published by other creators to borrow their taste profile on a specific project

The third point is practically useful for teams or anyone starting from scratch. Community forums and Discord servers regularly share personalization codes from creators with distinctive visual styles. Finding a code that aligns with your project’s direction can compress iteration time significantly.

Balancing personalization with stylize

Higher stylize values amplify personalization’s effect because the model has more creative latitude to incorporate your preferences. Lower stylize values constrain it. For a strong personal aesthetic signal, try --s 300 --p [your-code]. For a subtle preference nudge that doesn’t override your prompt, try --s 100 --p [your-code].
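Those two pairings can be captured in one small helper. The `--s` and `--p` parameters are MidJourney's; the function and the placeholder code value are this sketch's own:

```python
# Sketch of the strong vs. subtle personalization pairing described above.
# "your-code" stands in for a real personalization code.

def personalized(prompt: str, code: str, strong: bool = False) -> str:
    """Strong pairing uses --s 300; subtle pairing uses --s 100."""
    stylize = 300 if strong else 100
    return f"{prompt} --s {stylize} --p {code}"

print(personalized("moody alley portrait at night", "your-code", strong=True))
```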

Style References and the Style Creator

Using --sref for consistent visual style

Style references let you provide an image as a visual style input, separate from your content prompt. V8 extracts aesthetic qualities — color palette, lighting approach, texture, compositional style — and applies them without copying the literal content of the reference image.

Practical use cases:

  • Maintaining a consistent visual style across a content series
  • Matching an established brand aesthetic
  • Applying the look of a reference image you own

The --sw (style weight) parameter controls how strongly the reference influences the output. Higher values push outputs closer to the reference style. Lower values treat it as a loose suggestion. Run the same prompt at --sw 50, --sw 100, and --sw 200 to find the balance that works for your reference.
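The `--sw` sweep can be scripted the same way as the stylize test. The reference URL below is a placeholder, not a real image:

```python
# Build one prompt per --sw test value against the same style reference.

def style_weight_sweep(prompt: str, ref_url: str, weights=(50, 100, 200)) -> list:
    """Return one prompt string per --sw test value for a single --sref."""
    return [f"{prompt} --sref {ref_url} --sw {w}" for w in weights]

for variant in style_weight_sweep("editorial portrait", "https://example.com/ref.png"):
    print(variant)
```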

Style seeds for reproducibility: If you generate an image with a style you want to replicate, find its style code in the MidJourney web app by right-clicking the image. Reusing that code in future prompts produces more consistent aesthetic results than re-describing the style in text — description introduces variation; codes don’t.

The Style Creator

The Style Creator is a dedicated tool in the MidJourney web interface that works differently from --sref. Instead of referencing an external image, you select from generated options to define a new style from scratch.

The process:

  1. Open the Style Creator in the MidJourney web app
  2. Generate a set of test images across different aesthetic variations
  3. Choose the ones that match your target visual direction
  4. The Style Creator generates a reusable style code based on your selections

That code applies to any future prompt. It functions like a personalization code but for a specific visual aesthetic rather than your general taste. It’s particularly useful for brand work or content series where you need to maintain a defined visual language across many images. Define the style once at the start of a project, generate a code, and apply it consistently rather than re-specifying aesthetic details in every prompt.

Combining style tools

You can stack style references and personalization codes: --sref [url] --p [code]. V8 handles these in parallel — the style reference shapes the aesthetic, and personalization adds your preference layer on top. If the reference is dominating too strongly, lower --sw. If personalization is pulling outputs too far from your reference, use a lower stylize value to reduce the model’s creative latitude overall.
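A single composer function keeps stacked parameters consistent across a project. Everything below only assembles text; the parameter names are MidJourney's, the function and placeholder values are this sketch's own:

```python
# Assemble a prompt with optional stacked style parameters, omitting
# any that are not set.

def stack_styles(prompt, sref=None, sw=None, p_code=None, stylize=None):
    """Append --sref, --sw, --p, and --s in a fixed order when provided."""
    parts = [prompt]
    if sref:
        parts.append(f"--sref {sref}")
    if sw is not None:
        parts.append(f"--sw {sw}")
    if p_code:
        parts.append(f"--p {p_code}")
    if stylize is not None:
        parts.append(f"--s {stylize}")
    return " ".join(parts)

print(stack_styles("brand hero image", sref="https://example.com/ref.png",
                   sw=100, p_code="your-code", stylize=200))
```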

Build Repeatable Image Workflows with MindStudio

Getting great individual images from V8 is one challenge. Turning that into a repeatable production process — especially for teams generating volume — is a different one.

MindStudio’s AI Media Workbench is a workspace for AI image and video production that brings all major generation models together in one interface. No setup, no separate API keys, no switching between tools. It includes access to FLUX, Veo, Sora, and other leading models alongside 24+ media tools for post-processing: upscaling, background removal, face swap, image compositing, subtitle generation, and more.

For teams doing regular image production, the more useful capability is workflow automation. You can build agents in MindStudio that:

  • Accept a content brief or product description as input
  • Expand it into a properly structured, V8-optimized prompt with lighting, lens, mood, and style details included
  • Generate image variations across multiple models for comparison
  • Apply consistent post-processing steps automatically
  • Route finished assets directly to Slack, Google Drive, Airtable, or wherever your team works

The prompt-expansion layer is particularly useful if your team includes people who need to generate images without learning prompt syntax. A non-technical team member submits a short description; the agent handles the prompt engineering required to get a good V8 output. Finished images land in the right place without manual hand-off.
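The prompt-expansion idea can be illustrated with a plain template. A MindStudio agent would do this step with an LLM; this deterministic version just shows the shape, and every default value below is an assumption, not a recommendation:

```python
# Template-based sketch of prompt expansion: a short brief becomes a
# structured V8 prompt with lighting, lens, style, and parameters filled in.

TEMPLATE = ("{brief}, {lighting}, shot on an {lens} lens, "
            "{style}, photorealistic --ar {ar} --s {stylize}")

def expand_brief(brief, lighting="soft overcast natural light",
                 lens="85mm", style="editorial photography style",
                 ar="4:5", stylize=150):
    """Fill the template with defaults, overridable per call."""
    return TEMPLATE.format(brief=brief, lighting=lighting, lens=lens,
                           style=style, ar=ar, stylize=stylize)

print(expand_brief("a ceramic travel mug on a wooden desk"))
```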

You can also use MindStudio to generate variations across different models simultaneously — useful when you’re not sure whether V8, FLUX, or another model will best serve a specific brief.

MindStudio is free to start at mindstudio.ai. You don’t need to create separate accounts or manage API keys for individual image models.

Common Mistakes to Avoid in V8

Prompting like it’s V7. Keyword lists produce flat outputs in V8. If results look generic or boring, rewrite the prompt as a description. Add lighting, mood, composition, and camera details explicitly.

Leaving stylize at default for everything. The default stylize value is a starting point, not an answer. Test a range of stylize values for your specific output type and lock in what works.

Using personalization before building rating history. If you’ve only rated a small number of images, --p won’t produce meaningfully different results. Invest time in building a genuine rating history before depending on it.

Skipping draft mode during iteration. Full-quality V8 generations take time. Use draft mode to test directions quickly and run full renders only when a prompt is producing the right concept.

Stacking too many modifiers at once. V8 handles complex prompts better than V7, but combining multiple --sref URLs, a personalization code, a style code, high stylize, and ten style descriptors in a single prompt produces incoherent results. Add complexity one layer at a time and test at each step.

Treating alpha behavior as stable. V8 is in alpha. The model will change as MidJourney continues development. Document what’s working now and expect to revise prompting strategies as updates ship.

Frequently Asked Questions

Is MidJourney V8 available to all users?

V8 is in alpha as of 2025 and accessible through the MidJourney web interface. Subscribers on Standard, Pro, and Mega plans should have access, though some alpha features may still be rolling out. Check your account settings to confirm V8 is available and active for your plan.

What is the difference between MidJourney V7 and V8?

V8 produces more photorealistic outputs by default, handles text in images better, and follows complex prompts more literally than V7. V7 added artistic polish automatically; V8 requires you to specify it. The result is more precise control when you know what you want, but prompts that worked in V7 often need significant revision for V8.

How does the stylize parameter work in MidJourney V8?

The --stylize (or --s) parameter runs from 0 to 1000 and controls how much creative latitude the model applies. Low values (0–100) produce literal, accurate outputs. Mid-range values (200–400) add aesthetic polish and compositional choices. High values (600–1000) allow creative interpretation that may drift significantly from your prompt. For photorealistic work, lower stylize values (50–150) tend to perform best in V8 because the model’s baseline realism is already strong.

What are personalization codes and how do I use them?

Personalization codes are tied to your MidJourney image rating history. After rating a sufficient number of images, you can apply your aesthetic preferences to any generation using --personalize or --p [code]. You can share your code with teammates for consistent team output, or use codes published by other creators to apply their visual taste to your prompts.

How do I use the Style Creator in MidJourney V8?

The Style Creator is in the MidJourney web interface. Navigate to the Explore or Create section and find the Style Creator tool. Generate test images across different aesthetic variations, select the ones that match your target visual direction, and the tool generates a reusable style code. Apply that code to any prompt using --style [your-code] for consistent output across a project.

Why do my V7 prompts look different in V8?

V8 interprets prompts more literally and has a more photorealistic default aesthetic. If V7 outputs looked more artistic or painterly, that was the model adding polish automatically. V8 requires you to specify those qualities. Add lighting descriptions, mood language, composition details, and explicit style references to adapt V7 prompts for V8.

Key Takeaways

V8 rewards specificity and punishes vague prompts in ways V7 didn’t. Here’s the short version of what to change:

  • Specify lighting, composition, and mood explicitly. V8 will not supply these automatically. Be precise about what you want.
  • Test stylize values for your use case. Start at 50, 200, and 500, compare the results, and commit to a range that works for your output type.
  • Build your personalization profile before relying on it. Rate images you genuinely respond to, and wait until you have meaningful history before making --p part of your standard workflow.
  • Use draft mode for iteration. Reserve full-quality renders for prompts that are already working.
  • Use the Style Creator at project start. Define your visual language once, generate a code, and apply it consistently instead of re-specifying aesthetic details in every prompt.
  • Expect behavior to change. V8 is in alpha. Document what works now so you have a baseline when updates shift things.

If you’re generating image content at volume — for a team, a brand, or a content operation — MindStudio gives you a way to build repeatable workflows around V8 and other leading models. You can start free at mindstudio.ai.
