Published February 8, 2026

Suno Studio Updated: Removing Effects and Flexible Tempo Control

Version 1.2 expands editing and audio capabilities in the Suno Studio generative workstation, providing users with more control over the final mix.

Source: Suno · Reading time: 3–4 minutes

On February 6, 2026, Suno released an update for its Studio workstation – a tool available to Premier-tier subscribers. In short, Studio lets you create music with generative AI directly in a single workspace, with no need to switch between different programs.

The new version 1.2 introduces several features that give more control over the final sound. Let's break down exactly what has changed.

Remove FX: Stripping Effects for a Clean Export

One of the key new features is Remove FX. It performs a simple yet extremely useful task: it clears the audio clip of applied processing.

Imagine you've generated a vocal part, but it's saturated with reverb. This is acceptable for a sketch, but if you plan to work on the track in detail in your DAW (Digital Audio Workstation), excess effects just get in the way. Remove FX creates a version of the clip without processing – you can export it and choose suitable plugins yourself.

Warp Markers and Quantize: Taking Control of Timing

The second feature is called Warp Markers with Quantize. This is a tool for working with timing and "groove": how exactly sounds are distributed over time.

Now you can double-click a clip and place markers manually where needed, or use the faster route: place them automatically at transients (sharp bursts of sound, such as drum hits or the attack of a note). The audio can then be flexibly stretched or compressed in time.

Quantization is also available: if a recording "drifts" slightly in tempo, it can be snapped rigidly to the grid. This is convenient when you need to synchronize an off-beat part with the project's overall tempo.
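As a rough illustration of what these two steps do (a generic sketch of the technique, not Suno's actual algorithm; the function names and thresholds here are invented for the example), marker placement can be modeled as finding sudden energy jumps in the audio, and quantization as snapping those times to the nearest line of a beat grid:

```python
# Illustrative sketch, NOT Suno's implementation: crude transient detection
# by frame-energy jumps, plus quantization of times to a beat grid.

def detect_transients(samples, sample_rate, frame=512, ratio=4.0):
    """Return times (seconds) where frame energy jumps sharply -
    a stand-in for automatic warp-marker placement at transients."""
    times, prev = [], 1e-9
    for start in range(0, len(samples) - frame, frame):
        energy = sum(s * s for s in samples[start:start + frame]) / frame
        if energy > prev * ratio and energy > 1e-6:
            times.append(start / sample_rate)
        prev = max(energy, 1e-9)
    return times

def quantize(times, bpm, subdivision=4):
    """Snap each time to the nearest grid line (subdivision notes per beat)."""
    step = 60.0 / bpm / subdivision   # grid spacing in seconds
    return [round(t / step) * step for t in times]

# A hit landing slightly late at 0.52 s is pulled back onto the 0.5 s grid line
# (at 120 BPM with eighth-note subdivision the grid step is 0.25 s).
print(quantize([0.0, 0.52, 1.07], bpm=120, subdivision=2))  # → [0.0, 0.5, 1.0]
```

Real tools use far more robust onset detection and stretch the audio between markers rather than moving point events, but the grid-snapping arithmetic is essentially this.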

Alternates: Convenient Work with Variations

The developers have updated the system for working with tracks and takes. Previously, switching between several versions of the same sound wasn't implemented in the most convenient way.

Now this function is called Alternates – sound variations can be auditioned right inside a single track. Simply put: you generated five versions of a bass line and can quickly switch between them to choose the best one. At the same time, the project remains structured, without a pile-up of extra tracks.

Time Signatures Beyond 4/4

And finally: Studio now supports various time signatures, not just the standard 4/4.

You can create a project, for example, in 6/8 time, which is often found in ballads or waltzes. Or go further and experiment with complex signatures like 7/8 or 11/4 if you work in progressive rock or math rock genres.
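To make the practical difference concrete, here is a small sketch (generic music arithmetic, not tied to Studio's internals) of how long one bar lasts in each of these signatures, under the common assumption that the tempo in BPM counts quarter notes:

```python
# Illustrative arithmetic: duration of one bar in seconds,
# assuming the tempo (BPM) counts quarter notes.

def bar_seconds(bpm, numerator, denominator):
    quarter = 60.0 / bpm                     # one quarter note, in seconds
    return numerator * quarter * 4 / denominator

for sig in [(4, 4), (6, 8), (7, 8), (11, 4)]:
    print(sig, bar_seconds(120, *sig))
# At 120 BPM: 4/4 → 2.0 s, 6/8 → 1.5 s, 7/8 → 1.75 s, 11/4 → 5.5 s
```

The uneven bar lengths are exactly why a generator locked to 4/4 cannot produce a convincing waltz or a 7/8 progressive-rock riff: the grid itself has to change.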

For those used to a standard rhythm, this might seem like a niche option. However, for professional work and genre experiments, this is a fundamentally important capability.

Where Studio Is Heading

The Suno team states that their goal is to give professionals more control tools inside Studio so that the path from idea to finished result becomes shorter and simpler.

Judging by the update, Studio is gradually turning from a simple generator into a full-fledged editing and mixing tool. The program doesn't just create content but also allows you to "polish" the generation result.

At the moment, all new features are available only to Premier subscribers.

Original title: What's New in Suno Studio 1.2
Original publication date: Feb 7, 2026
Source: Suno (suno.com), a U.S.-based platform for AI-generated music and song creation.

How This Text Was Created

This material is not a direct retelling of the original publication. First, the news item itself was selected as an event important for understanding AI development. Then a processing framework was set: what needs clarification, what context to add, and where to place emphasis. This allowed us to turn a single announcement or update into a coherent and meaningful analysis.

Neural Networks Involved in the Process

We openly show which models were used at different stages of processing. Each performed its own role — analyzing the source, rewriting, fact-checking, and visual interpretation. This approach maintains transparency and clearly demonstrates how technologies participated in creating the material.

1. Claude Sonnet 4.5 (Anthropic) – Analyzing the original publication and writing the text. The model studies the source material and generates a coherent draft.
2. Gemini 3 Pro (Google DeepMind) – Translation into English.
3. Gemini 3 Flash Preview (Google DeepMind) – Text review and editing: correction of errors, inaccuracies, and ambiguous phrasing.
4. Gemini 3 Flash Preview (Google DeepMind) – Preparing the illustration description: generating a textual prompt for the visual model.
5. FLUX.2 Pro (Black Forest Labs) – Creating the illustration: generating an image based on the prepared prompt.

