Adobe Sensei

Sensei is an interesting word. Borrowed from Japanese, in modern vernacular it typically refers to an instructor, most often a martial arts instructor. Historically, it has referred to teachers and facilitators across many disciplines, or to a person of a certain stature in a relationship. Sensei, as Adobe uses it, is both teacher and student, learning from data and performing tasks.

In this article, we’ll discuss what Adobe Sensei is, what it’s used for and how it empowers video editors.

What is Adobe Sensei?

Adobe Sensei is Adobe’s Artificial Intelligence (AI) and Machine Learning (ML) technology that operates across the Adobe platform, including Adobe Experience Cloud, Creative Cloud and Document Cloud. While Adobe Sensei does a lot across Adobe’s catalog, video editors encounter it most in programs like Adobe Premiere Pro and After Effects.


Ultimately, Adobe Sensei uses both AI and ML technology to simplify and automate complex tasks that would take tons of energy and time if done manually. For example, Adobe Sensei can automatically rotoscope video.

It’s everywhere

Adobe Sensei has offered incredible value to editors working within the Adobe platform. Let’s look at a handful of effects and abilities Sensei offers video editors in Creative Cloud.

Content-Aware Fill in After Effects

Adobe Sensei in After Effects offers a tool called Content-Aware Fill, which allows you to quickly remove unwanted objects from a scene. This effect has existed in Photoshop for some time, also powered by Sensei. In After Effects, however, it removes objects from video footage.

Adobe After Effects’ Content-Aware Fill. Image courtesy: Adobe

Adobe Sensei tools in Audition

Adobe Audition includes a couple of new Sensei-powered features. The first is Remix, which will automatically fit the music of your choice to a length that you specify. Imagine setting the music to the length of your sequence instead of searching for tracks long enough to do the job or having to chop up tracks to shorten them. The second Sensei-powered feature in Audition is Auto-Ducking. This tool automatically — and dynamically — adjusts the volume of music or dialogue when there are other audio elements present. To do this, Adobe Sensei will analyze the audio you specify to figure out where the sound should be adjusted for a perfect mix.
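
To make the idea of ducking concrete, here is a minimal Python sketch of the general technique, not Adobe’s implementation: measure the loudness of the dialogue track in short windows and lower the music gain wherever dialogue is present. The sample rate, threshold and gain values are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of sidechain-style "ducking" (not Adobe's code):
# wherever the dialogue track is loud, lower the music track's gain.

SAMPLE_RATE = 48000          # samples per second (assumed)
WINDOW = SAMPLE_RATE // 10   # 100 ms analysis windows
SPEECH_THRESHOLD = 0.02      # RMS level that counts as "dialogue present"
DUCK_GAIN = 0.3              # music gain while dialogue is playing

def duck_music(music: np.ndarray, dialogue: np.ndarray) -> np.ndarray:
    """Return a copy of `music` with its level reduced wherever `dialogue` is loud."""
    ducked = music.copy()
    for start in range(0, len(dialogue), WINDOW):
        chunk = dialogue[start:start + WINDOW]
        rms = np.sqrt(np.mean(chunk ** 2))      # loudness of this window
        if rms > SPEECH_THRESHOLD:              # dialogue detected here
            ducked[start:start + WINDOW] *= DUCK_GAIN
    return ducked

# Example with synthetic mono audio: a constant tone stands in for "music"
# and a burst of noise in the middle stands in for "dialogue".
t = np.linspace(0, 5, 5 * SAMPLE_RATE, endpoint=False)
music = 0.2 * np.sin(2 * np.pi * 220 * t)
dialogue = np.zeros_like(music)
dialogue[SAMPLE_RATE:3 * SAMPLE_RATE] = 0.1 * np.random.randn(2 * SAMPLE_RATE)

mixed = duck_music(music, dialogue) + dialogue
```

A real tool would also smooth the gain changes to avoid audible pumping; Sensei handles that analysis and smoothing for you automatically.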

Capture Mobile

In Capture Mobile, Sensei makes things like font recognition possible. Sensei can automatically identify the font of text in an image. To do this, the text needs to be clearly legible so it can be sent to a pre-trained neural network, which then automatically delivers the font information to the user.
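
Adobe hasn’t published how this classifier works internally, but the general recipe of feeding a cropped image to a neural network that outputs a font label is easy to sketch. Everything below, from the font list to reusing an untrained ResNet-18 as a stand-in for Adobe’s trained model, is hypothetical illustration only.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Conceptual sketch of classifying the font in a cropped image of text
# (not Adobe's model; labels and backbone choice are purely illustrative).

FONT_NAMES = ["Garamond", "Helvetica", "Futura", "Baskerville"]  # hypothetical label set

# Reuse a standard image backbone and swap its final layer for a font classifier.
# In practice this network would be trained on rendered samples of each font;
# here the weights are uninitialized so the example stays self-contained.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, len(FONT_NAMES))
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def identify_font(crop: Image.Image) -> str:
    """Return the most likely font name for a legible crop of text."""
    batch = preprocess(crop.convert("RGB")).unsqueeze(0)  # shape (1, 3, 224, 224)
    with torch.no_grad():
        scores = model(batch)
    return FONT_NAMES[int(scores.argmax())]

# Example with a blank placeholder image standing in for a photographed sign.
print(identify_font(Image.new("RGB", (400, 120), "white")))
```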

Character Animator

Character Animator puts Sensei to work with Auto Lip-Sync. Sensei matches recorded voices and motion captured from your webcam to an animated character, or puppet. This essentially lets you act out the exact movements and expressions you want your animated characters to make using your webcam. That’s pretty awesome.

But what about us video editors?

Adobe Sensei does a lot in Premiere Pro. Let’s take a look at some of what it can do.

Color Match: This tool automatically matches the color of one shot to a reference shot, keeping key elements in the frame consistent and saving the editor from having to manually tweak color settings.

Auto-Ducking: While very similar to the tool found in Audition, this one can be applied to your audio clips without leaving Premiere Pro. To recap, Auto-Ducking automatically adjusts the volume level of a clip you specify to accommodate other audio elements.

Morph Cut: Using Morph Cut allows you to deliver polished interviews by smoothing out jump cuts between soundbites in talking head sequences. Pro tip: It’s good practice to use Morph Cut on talking head clips that were shot using a fixed camera. The interpolation between clips can look a little funky if the background differs between the two shots being morphed.

Auto-Classification: This uses Sensei to automatically apply tags, like contextual metadata, to audio content. Once a clip is tagged as primarily dialogue, music or other, Rush adjusts the effects and options available for it.

Auto-Creation: Auto-Creation follows along the same path as Auto-Classification. Sensei uses user tags or auto-generated content tags to automatically build creations such as slideshows or collages.

Auto Reframe: Auto Reframe analyzes wide footage and keeps important subjects in the frame when you deliver to a different aspect ratio, such as square or vertical video. It does this by dynamically repositioning the crop so your subjects stay centered; see the sketch after this list for the basic idea.
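
As promised above, here is a minimal Python sketch of the reframing idea, not Adobe’s implementation: given a frame and a tracked subject bounding box, crop to the target aspect ratio with the subject centered and the crop clamped to the frame edges. The frame size and subject box below are made-up example values.

```python
import numpy as np

# Illustrative sketch of auto-reframing (not Adobe's code):
# crop each frame to a new aspect ratio while keeping a tracked subject centered.

def reframe(frame: np.ndarray, subject_box: tuple, target_aspect: float) -> np.ndarray:
    """Crop `frame` (H x W x 3) to `target_aspect` (width / height),
    centering the crop on the middle of `subject_box` = (x0, y0, x1, y1)."""
    h, w = frame.shape[:2]

    # Largest crop with the target aspect ratio that fits inside the frame.
    crop_h = min(h, int(w / target_aspect))
    crop_w = int(crop_h * target_aspect)

    # Center the crop on the subject, then clamp it to the frame edges.
    cx = (subject_box[0] + subject_box[2]) // 2
    cy = (subject_box[1] + subject_box[3]) // 2
    x0 = int(np.clip(cx - crop_w // 2, 0, w - crop_w))
    y0 = int(np.clip(cy - crop_h // 2, 0, h - crop_h))
    return frame[y0:y0 + crop_h, x0:x0 + crop_w]

# Example: reframe a 1920x1080 (16:9) frame to 9:16 vertical video,
# keeping a hypothetical subject box near the right side of the frame.
wide_frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
vertical = reframe(wide_frame, subject_box=(1200, 300, 1500, 900), target_aspect=9 / 16)
print(vertical.shape)  # (1080, 607, 3)
```

In the real feature, Sensei also tracks the subject from frame to frame and smooths the motion of the crop so the reframed shot doesn’t jitter.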

Adobe Stock

Adobe Sensei doesn’t just work within Adobe’s video applications; it also works inside Adobe Stock.

First, it powers the platform’s Visual Search, a handy tool to quickly find stock images that are similar to another image. Depth of Field is another intelligent search feature that helps find images that specifically focus on the subject while blurring the other elements of the photograph.
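
Under the hood, visual search systems generally compare compact numerical embeddings of images rather than the pixels themselves. Adobe hasn’t published the details of its implementation, but the Python sketch below shows the general idea, using random vectors as stand-ins for embeddings that a neural network would produce; the file names and vector size are made up.

```python
import numpy as np

# Conceptual sketch of how a visual search might rank stock images (not Adobe's code):
# represent every image as an embedding vector (random here, standing in for the
# output of a neural network) and rank the library by similarity to the query image.

rng = np.random.default_rng(0)
library = {f"stock_{i:04d}.jpg": rng.normal(size=512) for i in range(1000)}  # hypothetical embeddings
query = rng.normal(size=512)                                                 # embedding of the reference image

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank the library by similarity to the query and keep the ten closest matches.
ranked = sorted(library.items(), key=lambda item: cosine(query, item[1]), reverse=True)
for name, embedding in ranked[:10]:
    print(name, round(cosine(query, embedding), 3))
```

Production systems typically compute these embeddings with a trained neural network and use approximate nearest-neighbor indexes so they can search millions of images in a fraction of a second.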

More tools

More Sensei-powered search features include Vivid Colors, which controls whether search results are bright and colorful or dark, cool and muted. Similarly, Copy Space filters images based on how much open space an image has for adding text. Whether that space occurs naturally or is staged, this feature will be a massive time-saver for video bloggers and creative editors alike.


Other creative tools that we don’t cover here are Behance, Colour Service, Dimension, Fonts, Fresco, Illustrator, InDesign, Photoshop, Photoshop Lightroom and Spark. However, the innovations in those applications are just as groundbreaking as the ones covered here. To learn more about Adobe Sensei in Adobe Creative Applications, check out Adobe’s Sensei page.

A tool for the future

Honestly, this article could fill an entire magazine. Adobe Sensei is so much more than a singular tool or block of code. It’s an entirely new way of approaching software development for Adobe. By placing this AI and machine learning intelligence layer between us and our end results, both our tools and our content get better far more quickly. And it’s not only the creative community benefiting from this change in approach. Sensei is helping Adobe make tremendous leaps forward in its Document Cloud and Experience Cloud applications. It’s also helping Adobe make massive breakthroughs in its own research and innovation.

The scale of Sensei’s implementation really cannot be overstated. In its first few years, Adobe Sensei has shown up across the bulk of Adobe’s product line, and the next few years should see improvements to the user experience continue at an accelerating rate.

To sum up, it’s safe to say that the addition of AI and machine learning to Adobe’s creative applications helps all of us create better content more quickly. With better tools, creators can be more efficient than ever before. While Adobe Sensei has been growing within Adobe’s products for a number of years, this is still just the tip of the iceberg. We don’t yet know all the ways machine learning and artificial intelligence will drive creative work in software. Sensei and AI technology are driving a new way to work in the video industry. We are no longer living in a time when AI is coming; it’s already here.