
MidJourney V8 Style Creator: How to Use Visual Style Exploration for Better Results

MidJourney V8's Style Creator lets you explore aesthetics visually instead of prompting. Learn how to use it to get consistent, stylized outputs.

MindStudio Team

Why Describing Visual Style in Words Usually Fails

If you’ve spent time with MidJourney, you’ve probably hit this wall: you have a clear aesthetic in mind, but no matter how cleverly you phrase your prompt, the output doesn’t match what’s in your head. Words like “cinematic,” “moody,” or “painterly” are vague. They mean different things to different people — and to different models.

MidJourney V8’s approach to style exploration addresses this directly. Instead of asking you to describe an aesthetic with language, it lets you see your options and pick what resonates. The result is faster iteration, more consistent outputs, and a much shorter feedback loop between what you imagine and what you generate.

This guide covers exactly how to use MidJourney V8’s visual style tools — the Style Tuner, style reference images, and style codes — to build and apply consistent aesthetics across your image generation work.


What MidJourney V8 Brings to Style Control

MidJourney V8, released in 2025, represents a significant step up in output quality and prompt fidelity. But beyond raw image quality improvements — better lighting, more accurate anatomy, stronger text rendering — V8 also refines the style toolset that’s been building since V6.

The key style features you’ll work with in V8 include:

  • Style Tuner — A visual questionnaire that generates a personalized style code from your aesthetic preferences
  • Style Reference (--sref) — A parameter that uses an image as a style guide, not a subject guide
  • Style Weight (--sw) — Controls how strongly the style reference influences your output
  • Personalization (--p) — A profile-level setting where MidJourney learns from images you’ve rated or liked

Together, these tools move style control out of the prompt and into a more visual, iterative process. V8’s improved coherence between prompt intent and visual output makes all of these more reliable than in previous model versions.
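MidJourney itself is driven through prompts, not code, but if you script prompt generation (for batch jobs or templates), the four parameters above compose into a single string. The helper below is a minimal sketch of that composition; the function name and defaults are our own, and only the parameter names (`--style`, `--sref`, `--sw`, `--p`) come from MidJourney.

```python
from typing import Optional, Sequence

def build_prompt(
    subject: str,
    style_code: Optional[str] = None,   # code from the Style Tuner, e.g. "a1b2c3d4"
    sref_urls: Sequence[str] = (),      # style reference image URLs
    sw: Optional[int] = None,           # style weight, valid range 0-1000
    personalize: bool = False,          # append --p for your personalization profile
) -> str:
    """Assemble a MidJourney prompt string from the style parameters above."""
    parts = [subject]
    if style_code:
        parts.append(f"--style {style_code}")
    if sref_urls:
        parts.append("--sref " + " ".join(sref_urls))
    if sw is not None:
        if not 0 <= sw <= 1000:
            raise ValueError("--sw must be between 0 and 1000")
        parts.append(f"--sw {sw}")
    if personalize:
        parts.append("--p")
    return " ".join(parts)

print(build_prompt("a mountain landscape at sunrise",
                   sref_urls=["https://example.com/ref.jpg"], sw=300))
# a mountain landscape at sunrise --sref https://example.com/ref.jpg --sw 300
```

Keeping prompt assembly in one place like this makes it easy to apply the same style settings across a batch of subjects.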


How the Style Tuner Works

The Style Tuner is the most distinctive of MidJourney’s style tools. It generates a visual grid of aesthetic options based on your prompt; you select your preferences, and it produces a unique style code you can reuse.

Step 1: Start the Tuner with Your Prompt

In the MidJourney bot (Discord or the web app), type:

/tune prompt: [your base prompt]

For example:

/tune prompt: portrait of a woman in a city at dusk

You’ll be asked to choose how many style direction pairs to evaluate — options are typically 16, 32, 64, or 128. More pairs give the system more data about your preferences and produce a more refined style code, but take longer and use more GPU credits.

For most use cases, 32 pairs is a solid middle ground.

Step 2: Make Your Selections

MidJourney generates a set of image pairs. Each pair shows the same prompt rendered in two different aesthetic directions. Your job is to pick which one appeals more to you — not which is “better” in some abstract sense, but which matches the look you’re going for.

The choices cover a wide range: color temperature, contrast, rendering style, texture, lighting approach, and overall mood. You don’t need to overanalyze them. Just pick what looks right.

This visual selection process is exactly what makes the Style Tuner more effective than text descriptions. You’re not trying to articulate “slightly desaturated with film grain and directional window light.” You’re just pointing at images and saying yes or no.

Step 3: Get Your Style Code

After you complete all the pairs, MidJourney generates a unique alphanumeric style code — something like --style a1b2c3d4. This code encodes your aesthetic preferences based on your selections.

Apply it to any future prompt using the --style parameter:

a woman walking through a rainy street --style a1b2c3d4

The style code overrides MidJourney’s default stylization and applies the specific aesthetic fingerprint you built through your selections.

Sharing and Reusing Style Codes

Style codes are portable. You can:

  • Save codes and reuse them across different prompts
  • Share codes with collaborators so everyone’s working from the same aesthetic baseline
  • Create multiple codes for different visual moods (one for editorial photography, one for illustration, etc.)

This is especially useful for brand or project consistency — a team can align on a single style code instead of each person writing different variations of the same adjectives.
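Because a shared code is just a string appended to each prompt, applying it across a team's prompt list is trivial to script. A minimal sketch (the code value and prompts here are hypothetical):

```python
TEAM_STYLE_CODE = "a1b2c3d4"  # hypothetical code one teammate generated via /tune

prompts = [
    "hero image for the landing page",
    "product shot on a wooden table",
    "portrait of the founder in an office",
]

# Append the shared code so every generation uses the same aesthetic baseline.
styled = [f"{p} --style {TEAM_STYLE_CODE}" for p in prompts]
for line in styled:
    print(line)
```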


Using Image References for Style with --sref

The --sref (style reference) parameter takes a different approach. Instead of generating style options for you to pick from, it extracts the aesthetic of an existing image and applies it to your prompt.

This is not the same as image-to-image generation. --sref doesn’t try to reproduce the content of your reference — it reads the visual qualities: color palette, texture, lighting mood, compositional feeling. Your prompt still controls what’s depicted.

Basic Usage

a mountain landscape at sunrise --sref [image URL]

MidJourney reads the style of your reference image and uses it to inform the aesthetic of the output. The reference can be anything: a photo, a painting, a screenshot, a film still.

Using Multiple References

You can pass multiple image URLs to blend styles:

a portrait of a young man --sref [url1] [url2]

MidJourney will blend the aesthetic qualities of both references. This is useful when no single reference image captures exactly what you want — you might pull warm tones from one and a specific textural quality from another.

Controlling Style Strength with --sw

The --sw parameter (style weight) lets you dial how strongly the reference influences the output. The range is 0 to 1000, with the default at 100.

  • Low values (0–50): Reference has a subtle influence; prompt has more control
  • Default (100): Balanced influence
  • High values (300–1000): Reference aesthetic dominates; very strong stylistic match

For example:

a woman in a coffee shop --sref [url] --sw 300

Experiment with this. If your reference is very distinctive (like a specific artist’s style), high --sw values can produce striking results. If you’re using a softer aesthetic reference, lower values may be enough.

--sref random

If you don’t have a specific reference in mind but want to explore, --sref random applies a randomly chosen style. Each run uses a different aesthetic interpretation of your prompt.

a cityscape at night --sref random

Run this multiple times, note the random sref values shown in the outputs you like, and reuse those specific values to reproduce that look in future generations.


Building Consistency Across a Project

One of the practical challenges with AI image generation is maintaining a coherent look across multiple images. If you’re building a set of images for a campaign, editorial piece, or product line, you want aesthetic consistency — not a random collection of individually interesting images.

MidJourney’s style tools make this achievable.

Define Your Style Code First

Before you generate any final images for a project, run the Style Tuner with a representative prompt and complete the selection process. Treat this as part of your pre-production work. The style code you generate becomes your project’s visual standard.

Lock In with --sref + Style Code

You can combine --sref with a style code from the Tuner:

[prompt] --sref [url] --style [code]

The style code acts as a foundation while --sref adds specific reference qualities. This combination gives you tight control over the output aesthetic.

Document Your Parameters

Keep a running note of the style codes, sref URLs, and --sw values that produce results you like. It sounds obvious, but without this habit you’ll spend time re-discovering settings you’ve already found.

A simple shared document with columns for style code, sref source, sw value, and example outputs works well for team projects.
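If your team prefers version-controlled files over a shared doc, the same log can live as CSV generated from a small script. This is a sketch of the table described above; all the entries are hypothetical.

```python
import csv
import io

# Columns mirror the shared-document suggestion above; all values are made up.
STYLE_LOG = [
    {"style_code": "a1b2c3d4", "sref_source": "film-still-01.jpg", "sw": 300,
     "notes": "editorial portraits, warm grain"},
    {"style_code": "e5f6g7h8", "sref_source": "(none)", "sw": 100,
     "notes": "product shots, clean studio light"},
]

def to_csv(rows):
    """Serialize the style log so it can be committed alongside project assets."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["style_code", "sref_source", "sw", "notes"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

print(to_csv(STYLE_LOG))
```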


MidJourney V8’s Personalization Feature

Beyond project-level style codes, MidJourney V8 has a personalization system that learns from your history across the platform. When you rate images (using the like/dislike functionality) or repeatedly return to certain outputs, the system builds a model of your aesthetic preferences.

To apply your personalization profile, add --p to your prompt:

a forest path in autumn --p

This is different from the Style Tuner in that it’s not prompt-specific — it’s a reflection of your broader taste across everything you’ve generated or rated. The more you interact with images in the platform, the more refined your personalization becomes.

Personalization works best as a baseline. For project-specific consistency, pair it with a style code or sref.


Tips for Getting Better Style Results

Match Your Tuner Prompt to Your End Use Case

When running the Style Tuner, the prompt you use to generate the style pairs affects the style code you end up with. A style code generated from “a portrait of a woman in natural light” will behave differently when applied to a landscape than a code generated from “a mountain scene.”

The closer your tuner prompt is to the type of images you’re actually generating, the better the style code will transfer.

Use Real-World References for --sref

Images from the real world — photographers’ portfolios, film stills, artwork — tend to produce more coherent style extractions than AI-generated images. AI images can carry artifacts or stylistic inconsistencies that confuse the reference process.

If you’re building a style around a specific aesthetic (say, 1970s documentary photography), look for high-quality scans or clean digital reproductions of real work.

Keep Your Prompt Focused When Style Weight Is High

When using high --sw values, the style reference does a lot of heavy lifting. This means your prompt needs to be clear and specific to still have influence over the content. Vague prompts + high style weight = outputs that look great but don’t depict what you wanted.

Generate Multiple Variations Before Committing

For any important project, generate at least 4–8 variations with your style settings before deciding the look is locked. V8’s outputs can vary meaningfully across a single prompt. Run the prompt several times, compare, and adjust --sw or sref if needed.
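One easy way to structure that comparison is to generate the same prompt at a few --sw values and review the outputs side by side. A minimal sketch of that sweep (the helper name and weight choices are our own):

```python
def sw_variants(prompt: str, sref_url: str, weights=(50, 100, 300)):
    """Return one prompt per style weight so outputs can be compared side by side."""
    return [f"{prompt} --sref {sref_url} --sw {w}" for w in weights]

for p in sw_variants("a woman in a coffee shop", "https://example.com/ref.jpg"):
    print(p)
```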


Common Mistakes to Avoid

Using too many style modifiers at once. Stacking --style, --sref, --p, and heavy stylize values together creates competing signals. Start simple — one style tool at a time — then add complexity.

Applying style codes to completely different subject matter. A style code trained on architectural photography will behave strangely when applied to close-up food photography. Style codes are context-sensitive. Build separate codes for meaningfully different image types.

Ignoring --sw and leaving it at default. The default style weight of 100 is a reasonable middle ground, but many sref use cases benefit from adjustment. Get in the habit of testing at least two sw values before settling.

Using low-resolution or low-quality sref images. The quality of your reference matters. A blurry or heavily compressed image will produce muddier style extractions. Use the cleanest version of any reference you can find.

Treating the Style Tuner as a one-time setup. As your projects evolve, generate new style codes. Your taste changes, project needs shift, and V8’s improved quality means older codes may not take full advantage of the model’s capabilities.


Scaling Visual Workflows Beyond MidJourney

If you’re doing regular volume image work — content for multiple clients, a consistent publishing schedule, product visualization at scale — doing everything manually in MidJourney eventually creates a bottleneck.

This is where MindStudio’s AI Media Workbench becomes relevant. It’s a dedicated workspace for AI image and video production that gives you access to multiple image generation models — including FLUX — in one place, without needing separate accounts or API keys.

More importantly, MindStudio lets you chain image generation into automated workflows. You can build an agent that takes a creative brief (a text description, a client input form, or data pulled from a project management tool), feeds it through an image generation pipeline, applies post-processing steps like upscaling or background removal, and delivers outputs to where they need to go — all without manual steps in between.

The 24+ media tools included in the Workbench handle the kinds of tasks that often create friction after generation: face swap, upscaling, subtitle generation, background removal, clip merging. You can string these together without custom code.

For teams that have already defined their visual style — using MidJourney’s style codes or reference images as their aesthetic foundation — MindStudio provides the infrastructure to produce at scale while keeping humans in the loop only where judgment matters.

You can try MindStudio free at mindstudio.ai and explore the AI Media Workbench without any setup or downloads.


Frequently Asked Questions

What is the MidJourney Style Tuner and how is it different from --sref?

The Style Tuner is a preference-based tool: you answer a visual questionnaire by selecting which images in a series of pairs match your taste, and MidJourney generates a style code based on your choices. The --sref parameter is reference-based: you provide an image URL and MidJourney extracts the aesthetic qualities from it. The Tuner generates your style from scratch; --sref borrows style from existing imagery. Both produce consistent outputs, but the methods suit different situations — use the Tuner when you want to define a new aesthetic, and sref when you have a specific visual reference you want to match.

Can I share MidJourney style codes with other users?

Yes. Style codes are alphanumeric strings that anyone can use by appending --style [code] to their prompt. You don’t need to have created the code yourself. This makes them useful for teams, where one person runs the Tuner process and shares the resulting code so everyone works from the same aesthetic baseline.

How many GPU credits does the Style Tuner use?

The credit cost depends on how many style pairs you choose to evaluate. Larger sessions (64 or 128 pairs) use significantly more credits than smaller ones (16 or 32). For most projects, 32 pairs provides a good balance between refinement and cost. Check MidJourney’s current credit documentation for exact usage, as this can vary by subscription tier.

Does --sref copy the content of the reference image?

No. The --sref parameter is designed to extract aesthetic qualities — color palette, tone, texture, lighting mood — without reproducing the subject matter of the reference image. Your prompt continues to control what’s depicted in the output. If you want to strongly influence composition or subject, that requires a different approach (like image prompting with a reference URL at the start of your prompt).

What is --sw and when should I adjust it?

--sw controls how strongly the style reference influences the output, on a scale from 0 to 1000 (default 100). At low values, the reference has a subtle effect and your text prompt has more control. At high values, the reference aesthetic dominates. Increase --sw when your reference has a strong, distinctive look you want to closely match. Lower it when you want the reference to add a light tonal quality without overriding your prompt.

Will my MidJourney style code work the same in V8 as it did in earlier versions?

Possibly not. Style codes are model-version dependent. A code generated using V6 may produce different results in V8 because the underlying model has changed. For best results with V8, generate new style codes using the V8 model rather than importing codes from earlier versions.


Key Takeaways

  • MidJourney V8’s style tools — Style Tuner, --sref, --sw, and personalization — let you define and apply visual aesthetics without relying entirely on text descriptions.
  • The Style Tuner works through visual preference selection and produces reusable, shareable style codes.
  • --sref extracts aesthetic qualities from existing images; combine it with --sw to control how strongly the reference influences the output.
  • Style codes are project assets — document them, share them with collaborators, and generate new ones for meaningfully different image types.
  • For teams producing image content at volume, tools like MindStudio’s AI Media Workbench can extend these workflows into automated pipelines that scale without adding manual steps.

The shift from text-first to visual-first style exploration is one of the more practical improvements in modern image generation. Once you build the habit of defining your style before generating at scale, both quality and consistency improve significantly.
