I Tried to Find the ‘Arousal Intelligence’ In An Animated, Augmented Reality Porn Star

Sometimes people—especially those in the field of public relations doing a spray-and-pray campaign, but also small-time developers, the occasional delusional vibe-coder, and local dipshits—deliver messages to my inbox like a cat dropping a dead mouse on my doorstep. For the most part, I resist the bait: often, bad press is still press to these people, or I’m just too busy to really look at the pitch or try the product.

This week, I’m coming back from a week of being entirely offline. I didn’t look at the news or my inboxes for seven straight days. I’m feeling properly healed, and also like I need to retraumatize myself back into the swing of things. Lucky me, on Monday morning, someone representing EnjoyMeNow emailed me about “a mobile website that places a photorealistic 3D character in your real room using augmented reality” using something called “Arousal Intelligence” and “real-time physics,” which streams “in a full engine from a global delivery network.” This press release, sent from “a globally focused media and entertainment holding company pioneering technology-driven innovation across digital platforms worldwide” called DCBC Group, which represents EnjoyMeNow, was very thrilling to read as someone who appreciates the art of a good word salad. I dropped what I was doing (deleting hundreds of other emails) to try it out.

Once on the EnjoyMeNow.com mobile site, after agreeing that you’re over 18, you’re asked to choose a “Pleasurette™,” a gender-neutral term for a series of 3D characters and a trademark filed two weeks ago. These include five women wearing the kind of lingerie you see on sex toy store packaging, and one dude, Adrian.

“Every character—called a Pleasurette™—is a photorealistic digital human built from scratch with realistic skin shading, multi-pass rendered hair, and soft-body physics. No real performers are filmed, recorded, or motion-captured. The characters are created entirely in 3D software.” Presented without comment are the Pleasurettes™:

[Image: the Pleasurettes™]

I choose Adrian first because I’m always curious how AR and VR porn copes with the fact that hovering pecs and an immobile penis are difficult to make sexy in this format, real or not. A lot of porn made for a VR or AR experience is shot from the penile point of view: It’s just easier to strap a 180-degree HD camera to a man’s face and tell him to hold still while a female performer is free to writhe around on top than vice-versa. Knowing this, and also knowing that the market for AR/VR porn caters heavily toward men (save for a few beacons of light, such as director Anna Lee, who a few years ago said of the proliferation of male-gaze VR porn: “You’re making the same stereotypical porn you made with a fucking camcorder. It’s the same MILF bending over in the kitchen to bake cookies”), I still went in hopeful. After all, they pitched me.

But it became clear almost immediately that Adrian is not playing for my team, so to speak, and getting the full EnjoyMeNow experience as intended requires equipment I don’t have. To get your chosen Pleasurette™ into your camera’s view, you have to hold your phone at an angle toward your crotch and stroke your penis. Helpfully, since I don’t have one of those, the app overlays a semi-transparent image of a penis at the bottom of the camera. It waits for you to put your hand in frame near the penis-guide to let the show begin. Moving my hand across the camera unlocks the start button. It’s not doing this to make sure you’re choked up on it before starting; it’s calibrating the position of the 3D model to your hand’s location and size, because that’s what controls its interactive aspects.



Without getting too graphic in a blog that’s already pretty explicit so far, this is what I encountered: Adrian walks into view totally nude, leading with his 3D dick at a 90-degree angle, and says “look up, here I come.” Tearing my eyes away from this perfectly straight tree branch and pointing the phone camera up as commanded, with more than a little trepidation, I see the jiggliest pair of male titties I’ve ever seen on screen, nipples wobbling independently of the rest of him. “Stroke back and forth your big dick,” he says, grammatically confounding me on top of already freaking me out with a thousand-yard stare. When I make a jerkoff motion in his general direction, he squats up and down like he’s teabagging me in Halo. Bizarrely, when I do this, his entire body shrinks, my hand now a monstrous size in comparison to his penis. No judgment, but he moans in a woman’s voice. “Come on my back soon,” he says, before a screen interrupts the session saying I need to pay $2.99 to unlock more features, such as making my Pleasurette™ orgasm. (For the record, I tried two payment methods to fork over this low, low price, both rejected.) The experience is the same with the other characters, just in different skins: the female characters crawl around and squat over my ghost penis, and I use my imagination to jerk it off, which ends up looking like I’m fistbumping tiny 3D women in the vagina. Sometimes, I clip through their hollow bodies and can see straight up into their heads or down through their labia.



[Embedded video, 1:26]





EnjoyMeNow’s PR rep claims that this interactivity is a world first. “Existing AR adult content is pre-rendered video or static models you look at,” they told me. “EnjoyMeNow is interactive, where the character responds to your hand in real-time, placed in your actual room through your phone camera. And it runs entirely in the mobile browser. No app, no download, no account. That combination doesn’t exist anywhere else from our research over the past year of creating this.” 

Companies like SexLikeReal and Naughty America have been doing AR and VR content for years, often featuring real porn performers. But this hand-tracking thing EnjoyMeNow is doing is different than that, they claim. And I’ll concede, yes, moving your hand up and down definitely makes the 3D model move around a little bit. Here’s how one of the femme characters acts:



[Embedded video, 1:29]





What really makes EnjoyMeNow stand apart from plenty of other AR porn products is this insistence that not employing real models or performers makes it better or smarter, somehow. On Monday, the DCBC Group’s website said of the choice to use CGI instead of people: “This was a founding decision, not a technical workaround. The adult entertainment industry has always relied on real people putting their bodies in front of a camera—and that comes with real consequences. Exploitation, coercion, content leaked without consent, performers pressured into work they’re uncomfortable with, and careers that follow people for the rest of their lives whether they want them to or not. We chose to build a platform where none of that is possible. Every character on EnjoyMeNow is created entirely in software. No one is filmed. No one is exploited. No one’s livelihood depends on what they’re willing to do on camera. The experience is just as immersive—and no real person is harmed or compromised in the process.”

The idea that the adult industry—and “putting bodies in front of a camera”—is inherently exploitative is not only false, it’s a harmful thing to say, and it’s especially galling coming from a literal porn web toy. This entire statement is so infuriating it’s hard to know where to begin with it. These are talking points used by the most conservative, anti-porn lobbying groups and politicians on the planet to justify stripping us all of rights, here being floated by an app that makes weird, schlocky, and unsatisfying 3D characters that the residents of Second Life’s least-attended sex clubs wouldn’t even find sexy.

But again, because I had the time and was feeling fresh, I asked DCBC Group to defend this statement with some data at least. “We’re not making a judgment about the adult industry or its performers,” they said. “We built a product around CGI characters, that’s a format choice, not a moral position. Some people prefer content that doesn’t involve real people. We built for them. We’ve now updated our press page to better reflect that; thank you Sam for that observation.” The page now says “EnjoyMeNow is built around computer-generated characters rather than real performers. This is a format choice—offering a new kind of private, interactive experience that doesn’t exist in traditional adult content.” Good for them for changing it.

And since users are being asked to position their dongs in front of their phone cameras on a browser-based app, I took a look at the “privacy” section of the FAQ. “Privacy is architectural, not a policy bolt-on. No app is installed. No account is required,” DCBC wrote. “All camera and motion processing runs locally on the phone—no frames, no images, no data ever leave the device. There is no cloud processing, no recording, and no persistent data stored after the session ends. When you close the tab, the adult content is automatically purged from the browser.” 

I asked DCBC’s rep if they could elaborate. Well, they could at least throw more words at it: “Regarding content encryption, every 3D asset is individually encrypted at the file level, stored encrypted, transmitted encrypted, and only decrypted at render time using per-session keys that never touch the device,” they said. “There are no downloadable model files. This is a custom content protection system built specifically to prevent our CGI assets from being extracted, redistributed or changed. The specifics are proprietary, but it goes well beyond transport-layer encryption. One core goal of this architecture is ensuring no one can upload their own content to the platform. This is a closed system by design.” 

“Just needless words really,” 404 Media’s privacy and security reporter Joseph Cox said about this when I showed him what DCBC said. It could easily be cut down to “we don’t allow uploads.” Which is, to be clear, for the best.

I should say here that I don’t go into these sorts of reviews assuming that I am the target audience. I’m pitched regularly by porn sites and sex toy companies on products that aren’t my personal thing; I wrote a column for years about kinks and fetishes that are not many people’s thing at all, but I wanted to better understand them and what appeal they hold for the people who love them. Maybe there are people out there who simply cannot consume content with real people in it; if that’s you, please hit me up, I would really like to hear more about that.

Evaluating the ethics of autonomous systems

Artificial intelligence is increasingly being used to help optimize decision-making in high-stakes settings. For instance, an autonomous system can identify a power distribution strategy that minimizes costs while keeping voltages stable.

But while these AI-driven outputs may be technically optimal, are they fair? What if a low-cost power distribution strategy leaves disadvantaged neighborhoods more vulnerable to outages than higher-income areas?

To help stakeholders quickly pinpoint potential ethical dilemmas before deployment, MIT researchers developed an automated evaluation method that balances the interplay between measurable outcomes, like cost or reliability, and qualitative or subjective values, such as fairness.   

The system separates objective evaluations from user-defined human values, using a large language model (LLM) as a proxy for humans to capture and incorporate stakeholder preferences. 

The adaptive framework selects the best scenarios for further evaluation, streamlining a process that typically requires costly and time-consuming manual effort. These test cases can show situations where autonomous systems align well with human values, as well as scenarios that unexpectedly fall short of ethical criteria.

“We can insert a lot of rules and guardrails into AI systems, but those safeguards can only prevent the things we can imagine happening. It is not enough to say, ‘Let’s just use AI because it has been trained on this information.’ We wanted to develop a more systematic way to discover the unknown unknowns and have a way to predict them before anything bad happens,” says senior author Chuchu Fan, an associate professor in the MIT Department of Aeronautics and Astronautics (AeroAstro) and a principal investigator in the MIT Laboratory for Information and Decision Systems (LIDS).

Fan is joined on the paper by lead author Anjali Parashar, a mechanical engineering graduate student; Yingke Li, an AeroAstro postdoc; and others at MIT and Saab. The research will be presented at the International Conference on Learning Representations.

Evaluating ethics

In a large system like a power grid, evaluating the ethical alignment of an AI model’s recommendations in a way that considers all objectives is especially difficult.

Most testing frameworks rely on pre-collected data, but labeled data on subjective ethical criteria are often hard to come by. In addition, because ethical values and AI systems are both constantly evolving, static evaluation methods based on written codes or regulatory documents require frequent updates.

Fan and her team approached this problem from a different perspective. Drawing on their prior work evaluating robotic systems, they developed an experimental design framework to identify the most informative scenarios, which human stakeholders would then evaluate more closely.

Their two-part system, called Scalable Experimental Design for System-level Ethical Testing (SEED-SET), incorporates quantitative metrics and ethical criteria. It can identify scenarios that effectively meet measurable requirements and align well with human values, and vice versa.   

“We don’t want to spend all our resources on random evaluations. So, it is very important to guide the framework toward the test cases we care the most about,” Li says.

Importantly, SEED-SET does not need pre-existing evaluation data, and it adapts to multiple objectives.

For instance, a power grid may have several user groups, including a large rural community and a data center. While both groups may want low-cost and reliable power, each group’s priority from an ethical perspective may vary widely.

These ethical criteria may not be well-specified, so they can’t be measured analytically.

The power grid operator wants to find the most cost-effective strategy that best meets the subjective ethical preferences of all stakeholders.

SEED-SET tackles this challenge by splitting the problem into two, following a hierarchical structure. An objective model considers how the system performs on tangible metrics like cost. Then a subjective model that considers stakeholder judgments, like perceived fairness, builds on the objective evaluation.

“The objective part of our approach is tied to the AI system, while the subjective part is tied to the users who are evaluating it. By decomposing the preferences in a hierarchical fashion, we can generate the desired scenarios with fewer evaluations,” Parashar says.
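To make that decomposition concrete, here is a minimal sketch, assuming a power-grid example like the one above. Every name in it (Scenario, objective_eval, subjective_prefer, the toy cost model) is an illustrative stand-in rather than the paper’s implementation, and llm can be any text-completion callable acting as the stakeholder proxy.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    demand_profile: list[float]   # e.g., hourly load per neighborhood
    dispatch_plan: list[float]    # candidate power distribution strategy

def objective_eval(scenario: Scenario) -> dict:
    """Quantitative metrics from a simulator (stubbed with a toy cost model)."""
    cost = sum(scenario.dispatch_plan)
    unmet = [max(0.0, d - p) for d, p in zip(scenario.demand_profile,
                                             scenario.dispatch_plan)]
    return {"cost": cost, "unmet_demand": unmet}

def subjective_prefer(metrics_a: dict, metrics_b: dict,
                      stakeholder_values: str, llm) -> str:
    """LLM-as-proxy pairwise judgment, built on top of the objective results."""
    prompt = (
        f"Stakeholder values: {stakeholder_values}\n"
        f"Outcome A: {metrics_a}\nOutcome B: {metrics_b}\n"
        "Which outcome aligns better with these values? Answer 'A' or 'B'."
    )
    return llm(prompt).strip()  # expected to return "A" or "B"
```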

Encoding subjectivity

To perform the subjective assessment, the system uses an LLM as a proxy for human evaluators. The researchers encode the preferences of each user group into a natural language prompt for the model.

The LLM uses these instructions to compare two scenarios, selecting the preferred design based on the ethical criteria.

“After seeing hundreds or thousands of scenarios, a human evaluator can suffer from fatigue and become inconsistent in their evaluations, so we use an LLM-based strategy instead,” Parashar explains.

SEED-SET uses the selected scenario to simulate the overall system (in this case, a power distribution strategy). These simulation results guide its search for the next best candidate scenario to test.
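Continuing the hypothetical sketch above, the outer loop might look something like the following. The random acquisition rule is a deliberate placeholder where SEED-SET’s actual model-based experimental-design criterion would go.

```python
import random

def adaptive_search(candidates, stakeholder_values, llm, budget=20):
    """Toy outer loop: simulate, judge, and steer toward informative scenarios."""
    pool = list(candidates)
    evaluated = []                 # (scenario, metrics) pairs seen so far
    best = None
    for _ in range(budget):
        scenario = pick_next(pool, evaluated)   # acquisition step
        pool.remove(scenario)
        metrics = objective_eval(scenario)
        evaluated.append((scenario, metrics))
        if best is None:
            best = (scenario, metrics)
        elif subjective_prefer(metrics, best[1], stakeholder_values, llm) == "A":
            best = (scenario, metrics)          # new scenario preferred
    return best, evaluated

def pick_next(pool, evaluated):
    # Placeholder: SEED-SET's real acquisition rule is model-based and
    # targets the most informative scenario, not a random one.
    return random.choice(pool)
```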

In the end, SEED-SET intelligently selects the most representative scenarios, both those that meet the objective metrics and ethical criteria and those that fail to align with them. In this way, users can analyze the performance of the AI system and adjust its strategy.

For instance, SEED-SET can pinpoint cases of power distribution that prioritize higher-income areas during periods of peak demand, leaving underprivileged neighborhoods more prone to outages.

To test SEED-SET, the researchers evaluated realistic autonomous systems, like an AI-driven power grid and an urban traffic routing system. They measured how well the generated scenarios aligned with ethical criteria.

The system generated more than twice as many optimal test cases as the baseline strategies in the same amount of time, while uncovering many scenarios other approaches overlooked.

“As we shifted the user preferences, the set of scenarios SEED-SET generated changed drastically. This tells us the evaluation strategy responds well to the preferences of the user,” Parashar says.

To measure how useful SEED-SET would be in practice, the researchers will need to conduct a user study to see if the scenarios it generates help with real decision-making.

In addition to running such a study, the researchers plan to explore the use of more efficient models that can scale up to larger problems with more criteria, such as evaluating LLM decision-making.

This research was funded, in part, by the U.S. Defense Advanced Research Projects Agency.

Preview tool helps makers visualize 3D-printed objects

Designers, makers, and others often use 3D printing to rapidly prototype a range of functional objects, from movie props to medical devices. Accurate print previews are essential so users know a fabricated object will perform as expected.

But previews generated by most 3D-printing software focus on function rather than aesthetics. A printed object may end up with a different color, texture, or shading than the user expected, resulting in multiple reprints that waste time, effort, and material.

To help users envision how a fabricated object will look, researchers from MIT and elsewhere developed an easy-to-use preview tool that puts appearance first.

Users upload a screenshot of the object from their 3D-printing software, along with a single image of the print material. From these inputs, the system automatically generates a rendering of how the fabricated object is likely to look.

The artificial intelligence-powered system, called VisiPrint, is designed to work with a range of 3D-printing software and can handle any material example. It considers not only the color of the material, but also gloss, translucency, and how nuances of the fabrication process affect the object’s appearance.

Such aesthetics-focused previews could be especially useful in areas like dentistry, by helping clinicians ensure temporary crowns and bridges match the appearance of a patient’s teeth, or in architecture, to aid designers in assessing the visual impact of models.

“3D printing can be a very wasteful process. Some studies estimate that as much as a third of the material used goes straight to the landfill, often from prototypes the user ends up discarding. To make 3D printing more sustainable, we want to reduce the number of tries it takes to get the prototype you want. The user shouldn’t have to try out every printing material they have before they settle on a design,” says Maxine Perroni-Scharf, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on VisiPrint.

She is joined on the paper by Faraz Faruqi, a fellow EECS graduate student; Raul Hernandez, an MIT undergraduate; SooYeon Ahn, a graduate student at the Gwangju Institute of Science and Technology; Szymon Rusinkiewicz, a professor of computer science at Princeton University; William Freeman, the Thomas and Gerd Perkins Professor of EECS at MIT and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); and senior author Stefanie Mueller, an associate professor of EECS and Mechanical Engineering at MIT, and a member of CSAIL. The research will be presented at the ACM CHI Conference on Human Factors in Computing Systems.

Accurate aesthetics

The researchers focused on fused deposition modeling (FDM), the most common type of 3D printing. In FDM, print material filament is melted and then squirted through a nozzle to fabricate an object one layer at a time.

Generating accurate aesthetic previews is challenging because the melting and extrusion process can change the appearance of a material, as can the height of each deposited layer and the path the nozzle follows during fabrication.

VisiPrint uses two AI models that work together to overcome those challenges.

The VisiPrint preview is based on two inputs: a screenshot of the digital design from a user’s 3D-printing software (called “slicer” software), and an image of the print material, which can be taken from an online source or captured from a printed sample.

From these inputs, a computer vision model extracts features from the material sample that are important for the object’s appearance.

It feeds those features to a generative AI model that computes the geometry and structure of the object, while incorporating the so-called “slicing” pattern the nozzle will follow as it extrudes each layer.

The key to the researchers’ approach is a special conditioning method. This involves carefully adjusting the inner workings of the model to guide it, so it follows the slicing pattern and obeys the constraints of the 3D-printing process.

Their conditioning method utilizes a depth map that preserves the shape and shading of the object, along with a map of the edges that reflects the internal contours and structural boundaries.

“If you don’t have the right balance of these two things, you could end up with bad geometry or an incorrect slicing pattern. We had to be careful to combine them in the right way,” Perroni-Scharf says.
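For a mental model of that conditioning step, below is a rough, ControlNet-style sketch under stated assumptions: every module name and dimension is invented for illustration, and this is not VisiPrint’s actual architecture. It simply shows how a depth map (shape and shading) and an edge map (slicing contours) might be blended and combined with material features to steer a preview generator.

```python
import torch
import torch.nn as nn

# Illustrative conditioning sketch, not VisiPrint's real model: a depth map
# carries shape/shading, an edge map carries the slicing contours, and
# extracted material features modulate the result.
class ConditionedPreview(nn.Module):
    def __init__(self, channels: int = 64, material_dim: int = 512):
        super().__init__()
        self.depth_enc = nn.Conv2d(1, channels, 3, padding=1)   # shape/shading cue
        self.edge_enc = nn.Conv2d(1, channels, 3, padding=1)    # slicing-path cue
        self.material_proj = nn.Linear(material_dim, channels)  # material features
        self.decoder = nn.Conv2d(channels, 3, 3, padding=1)     # RGB preview

    def forward(self, depth, edges, material_feat):
        # The article stresses balancing the two control signals; here that
        # balance is just a fixed 50/50 blend.
        cond = 0.5 * self.depth_enc(depth) + 0.5 * self.edge_enc(edges)
        cond = cond + self.material_proj(material_feat)[:, :, None, None]
        return torch.sigmoid(self.decoder(cond))

# usage sketch:
# preview = ConditionedPreview()(torch.rand(1, 1, 128, 128),
#                                torch.rand(1, 1, 128, 128), torch.rand(1, 512))
```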

A user-focused system

The team also produced an easy-to-use interface where one can upload the required images and evaluate the preview.

The VisiPrint interface enables more advanced makers to adjust multiple settings, such as the influence of certain colors on the final appearance.

In the end, the aesthetic preview is intended to complement the functional preview generated by slicer software, since VisiPrint does not estimate printability, mechanical feasibility, or likelihood of failure.

To evaluate VisiPrint, the researchers conducted a user study that asked participants to compare the system to other approaches. Nearly all participants said it provided better overall appearance as well as more textural similarity with printed objects.

In addition, the VisiPrint preview process took about a minute on average, which was more than twice as fast as any competing method.

“VisiPrint really shined when compared to other AI interfaces. If you give a more general AI model the same screenshots, it might randomly change the shape or use the wrong slicing pattern because it had no direct conditioning,” she says.

In the future, the researchers want to address artifacts that can occur when model previews have extremely fine details. They also want to add features that allow users to optimize parts of the printing process beyond color of the material.

“It is important to think about the way that we fabricate objects. We need to continue striving to develop methods that reduce waste. To that end, this marriage of AI with the physical making process is an exciting area of future work,” Perroni-Scharf says.

“‘What you see is what you get’ has been the main thing that made desktop publishing ‘happen’ in the 1980s, as it allowed users to get what they wanted at first try. It is time to get WYSIWYG for 3D printing as well. VisiPrint is a great step in this direction,” says Patrick Baudisch, a professor of computer science at the Hasso Plattner Institute, who was not involved with this work.

This research was funded, in part, by an MIT Morningside Academy for Design Fellowship and an MIT MathWorks Fellowship.

MIT researchers use AI to uncover atomic defects in materials

In biology, defects are generally bad. But in materials science, defects can be intentionally tuned to give materials useful new properties. Today, atomic-scale defects are carefully introduced during the manufacturing process of products like steel, semiconductors, and solar cells to help improve strength, control electrical conductivity, optimize performance, and more.

But even as defects have become a powerful tool, accurately measuring different types of defects and their concentrations in finished products has been challenging, especially without cutting open or damaging the final material. Without knowing what defects are in their materials, engineers risk making products that perform poorly or have unintended properties.

Now, MIT researchers have built an AI model capable of classifying and quantifying certain defects using data from a noninvasive neutron-scattering technique. The model, which was trained on 2,000 different semiconductor materials, can detect up to six kinds of point defects in a material simultaneously, something that would be impossible using conventional techniques alone.

“Existing techniques can’t accurately characterize defects in a universal and quantitative way without destroying the material,” says lead author Mouyang Cheng, a PhD candidate in the Department of Materials Science and Engineering. “For conventional techniques without machine learning, detecting six different defects is unthinkable. It’s something you can’t do any other way.”

The researchers say the model is a step toward harnessing defects more precisely in products like semiconductors, microelectronics, solar cells, and battery materials.

“Right now, detecting defects is like the saying about seeing an elephant: Each technique can only see part of it,” says senior author and associate professor of nuclear science and engineering Mingda Li. “Some see the nose, others the trunk or ears. But it is extremely hard to see the full elephant. We need better ways of getting the full picture of defects, because we have to understand them to make materials more useful.”

Joining Cheng and Li on the paper are postdoc Chu-Liang Fu, undergraduate researcher Bowen Yu, master’s student Eunbi Rha, PhD student Abhijatmedhi Chotrattanapituk ’21, and Oak Ridge National Laboratory staff members Douglas L. Abernathy PhD ’93 and Yongqiang Cheng. The paper appears today in the journal Matter.

Detecting defects

Manufacturers have gotten good at tuning defects in their materials, but measuring precise quantities of defects in finished products is still largely a guessing game.

“Engineers have many ways to introduce defects, like through doping, but they still struggle with basic questions like what kind of defect they’ve created and in what concentration,” Fu says. “Sometimes they also have unwanted defects, like oxidation. They don’t always know if they introduced some unwanted defects or impurity during synthesis. It’s a longstanding challenge.”

The result is that there are often multiple defects in each material. Unfortunately, each method for understanding defects has its limits. Techniques like X-ray diffraction and positron annihilation characterize only some types of defects. Raman spectroscopy can discern the type of defect but can’t directly infer the concentration. Another technique, transmission electron microscopy, requires people to cut thin slices of samples for scanning.

In a few previous papers, Li and collaborators applied machine learning to experimental spectroscopy data to characterize crystalline materials. For the new paper, they wanted to apply that technique to defects.

For their experiment, the researchers built a computational database of 2,000 semiconductor materials. They made sample pairs of each material, with one doped for defects and one left without defects, then used a neutron-scattering technique that measures the different vibrational frequencies of atoms in solid materials. They trained a machine-learning model on the results.

“That built a foundational model that covers 56 elements in the periodic table,” Cheng says. “The model leverages the multihead attention mechanism, just like what ChatGPT is using. It similarly extracts the difference in the data between materials with and without defects and outputs a prediction of what dopants were used and in what concentrations.”
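As a concrete illustration of Cheng’s description, here is a minimal, hypothetical sketch of a multihead-attention model over the difference between doped and pristine vibrational spectra. The dimensions, head count, and output heads are assumptions for illustration, not the team’s actual architecture.

```python
import torch
import torch.nn as nn

# Toy sketch (not the authors' code): attend over the difference between the
# vibrational spectra of a doped and a pristine sample, then predict which
# defect types are present and at what concentrations.
class DefectModel(nn.Module):
    def __init__(self, d_model: int = 128, n_defect_types: int = 6):
        super().__init__()
        self.embed = nn.Linear(1, d_model)  # one token per frequency bin
        self.attn = nn.MultiheadAttention(d_model, num_heads=8, batch_first=True)
        self.type_head = nn.Linear(d_model, n_defect_types)  # which defects
        self.conc_head = nn.Linear(d_model, n_defect_types)  # how much of each

    def forward(self, doped, pristine):
        # doped, pristine: (batch, n_bins) intensity spectra
        diff = (doped - pristine).unsqueeze(-1)
        x = self.embed(diff)
        x, _ = self.attn(x, x, x)            # self-attention across bins
        pooled = x.mean(dim=1)
        types = torch.sigmoid(self.type_head(pooled))  # presence probabilities
        concs = torch.relu(self.conc_head(pooled))     # non-negative amounts
        return types, concs
```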

The researchers fine-tuned their model, verified it on experimental data, and showed it could measure defect concentrations in an alloy commonly used in electronics and in a separate superconductor material.

The researchers also doped the materials multiple times to introduce multiple point defects and test the limits of the model, ultimately finding it can make predictions about up to six defects in materials simultaneously, with defect concentrations as low as 0.2 percent.

“We were really surprised it worked that well,” Cheng says. “It’s very challenging to decode the mixed signals from two different types of defects — let alone six.”

A model approach

Typically, manufacturers of things like semiconductors run invasive tests on a small percentage of products as they come off the manufacturing line, a slow process that limits their ability to detect every defect.

“Right now, people largely estimate the quantities of defects in their materials,” Yu says. “It is a painstaking experience to check the estimates by using each individual technique, which only offers local information in a single grain anyway. It creates misunderstandings about what defects people think they have in their material.”

The results were exciting for the researchers, but they note their technique measuring the vibrational frequencies with neutrons would be difficult for companies to quickly deploy in their own quality-control processes.

“This method is very powerful, but its availability is limited,” Rha says. “Vibrational spectra is a simple idea, but in certain setups it’s very complicated. There are some simpler experimental setups based on other approaches, like Raman spectroscopy, that could be more quickly adopted.”

Li says companies have already expressed interest in the approach and asked when it will work with Raman spectroscopy, a widely used technique that measures the scattering of light. Li says the researchers’ next step is training a similar model based on Raman spectroscopy data. They also plan to expand their approach to detect features that are larger than point defects, like grains and dislocations.

For now, though, the researchers believe their study demonstrates the inherent advantage of AI techniques for interpreting defect data.

“To the human eye, these defect signals would look essentially the same,” Li says. “But the pattern recognition of AI is good enough to discern different signals and get to the ground truth. Defects are this double-edged sword. There are many good defects, but if there are too many, performance can degrade. This opens up a new paradigm in defect science.”

The work was supported, in part, by the Department of Energy and the National Science Foundation.

Seeing sounds

As one of the first students in MIT’s new Music Technology and Computation Graduate Program, Mariano Salcedo ’25 is researching the intersection between artificial intelligence and music visuals.

Specifically, his graduate research focuses on neural cellular automata (NCA), which merges classical cellular automata with machine learning techniques to grow images that can regenerate.

When paired with a stimulus like music, these images can “show” sounds in action.

“This approach enables anyone to create music-driven visuals while leveraging the expressive and sometimes unpredictable dynamics of self-organized systems,” Salcedo says. Through the web interface Salcedo has designed, users can adjust the relationship between the music’s energy and the NCA system to create unique visual performances using any music audio stream.
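To give a flavor of how such a system can work, here is a toy neural cellular automaton update step with an audio-energy knob. This is a generic NCA sketch under my own assumptions, not Salcedo’s implementation: each cell senses its 3×3 neighborhood, a small learned rule proposes a state change, and a loudness scalar controls how far the grid moves per step.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Generic neural cellular automaton step (not Salcedo's code).
class MusicNCA(nn.Module):
    def __init__(self, channels: int = 16):
        super().__init__()
        self.perceive = nn.Conv2d(channels, channels * 3, 3, padding=1)  # local sensing
        self.update = nn.Conv2d(channels * 3, channels, 1)               # update rule

    def step(self, grid: torch.Tensor, audio_energy: float) -> torch.Tensor:
        dx = self.update(F.relu(self.perceive(grid)))
        # Louder passages drive larger state changes, so the image "dances."
        return grid + audio_energy * dx

# usage: grid = torch.rand(1, 16, 64, 64)
#        grid = MusicNCA().step(grid, audio_energy=0.8)
```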

“I want the visuals to complement and elevate the listening experience,” he says.

Last year Salcedo, the Alex Rigopulos (1992) Fellow in Music Technology and Computation, earned a BS in artificial intelligence and decision making from MIT, where he explored signal processing in machine learning and how a classical understanding of signals can inform how we understand AI. Now he’s one of five master’s students in the Music Technology and Computation Graduate Program’s inaugural cohort.

The program, directed by professor of the practice in music technology Eran Egozy ’93, MNG ’95, is a collaboration between MIT Music and Theater Arts in the School of Humanities, Arts, and Social Sciences, and the School of Engineering. It invites practitioners to study, discover, and develop new computational approaches to music. It also includes a speaker series that exposes students and the broader MIT community to music industry professionals, artists, technologists, and other researchers.

Rigopulos ’92, SM ’94, is a video game designer, musician, and former CEO of Harmonix Music Systems, a company he co-founded with Egozy in 1995. Harmonix is now a part of Epic Games, where Rigopulos is the director of game development for music.

“MIT is where I was first able to pursue my passion for music technology decades ago, and that experience was the springboard for a long and fulfilling career,” says Rigopulos. “So, when MIT launched an advanced degree program in music technology, I was thrilled to fund a fellowship to help propel this exciting new program.”

Egozy is enthusiastic about Salcedo’s work and his commitment to further exploring its possibilities. “He is a beautiful example of a multidisciplinary researcher who thinks deeply about how to best use technology to enhance and expand human creativity,” he says.

Salcedo has been selected to deliver the student address at the 2026 Advanced Degree Ceremony for the School of Humanities, Arts, and Social Sciences. “It’s an honor and it’s daunting,” he says. “It feels like a huge responsibility,” though one he’s eager to embrace. His selection also pleases Egozy. “I am super excited that Mariano was chosen to deliver this year’s keynote,” he enthuses.

Changing gears

Growing up in Mexico and Texas, Mariano Salcedo couldn’t readily indulge his passion for creating music. “There are no bands in Mexican public schools,” he says. While some families could pay for instruments and lessons, others like Salcedo’s were less fortunate.

“I’ve always loved music,” he continues. “I was a listener.”

Salcedo began his MIT journey as a mechanical engineering student, applying to MIT through the Questbridge program. “I heard if you like engineering and science that attending MIT would be a great choice,” he recalls. “Nerds are welcomed and embraced.” While he dutifully worked toward completing his MechE curriculum, music and technology came calling after a chance encounter with an LLM.

“I was introduced to an LLM chatbot and was blown away,” he recalls. “This was something that was speaking to me. I was both awed and frightened.” After his encounter with the chatbot, Salcedo switched his major from mechanical engineering to artificial intelligence and decision making.

“I basically started over after being two-thirds of the way through the MechE curriculum,” he says. He learned about the possibilities available with AI but also confronted some of the challenges bedeviling researchers and developers, including its raw power, ensuring its responsible use, human bias, limited access for people from underrepresented groups, and a lack of diversity among developers. He decided he might be able to change that picture.

“I thought one more person in the field could make a difference,” he says.

While Salcedo was completing his undergraduate studies, his love of music resurfaced. “I began DJ’ing at MIT and was hooked,” he says. While he hadn’t learned to play a traditional instrument, he discovered he could create engaging soundscapes with technology. “I bought a digital audio workstation to help me make music,” he continues.

Egozy and Salcedo met in 2024 while Salcedo completed an Undergraduate Research Opportunities Program rotation as a game developer in Egozy’s lab. “He was incredibly curious and has grown tremendously over a very short time period,” Egozy says. Egozy became an informal, though important, mentor to Salcedo. “He brings great energy and thoughtfulness to his work, and to supporting others in the [music technology and computation graduate] program,” Egozy notes.

Salcedo also took a class with Egozy, 21M.385/21M.585/6.4450 (Interactive Music Systems), which further fed his appetite for the creativity he craved while also allowing him to indulge his fascination with music’s possibilities. By taking advantage of courses in the HASS curriculum, he further developed his understanding of music theory and related technologies.

“I took a class with professor Leslie Tilley, 21M.240 (Critically Thinking in Music), which helped establish a valuable framework for understanding music making,” he says, “while a class like 6.3000 (Signal Processing) helped me connect intuition with science.”

Working across disciplines

While Salcedo is passionate about his music and his research, he’s also invested in building relationships with his fellow students. He’s a member of the fraternity Sigma Nu, where he says he “found a home and community.” He also took a MISTI trip to Chile in summer 2023, where he conducted music technology research. Salcedo praises the culture of camaraderie at MIT and is grateful for its influence on his work as a scholar. “MIT has taught me how to learn,” he says.

Professors encouraged him to present his research and findings. He presented his work — Artificial Dancing Intelligence: Neural Cellular Automata for Visual Performance of Music — at the Association for the Advancement of Artificial Intelligence conference in Singapore in January 2026.

Salcedo believes his research can potentially move beyond music visualization. “What if we could improve the ways we model self-organized systems?” he asks. “That is, systems like multicellular organisms, flocks of birds, or societies that interact locally but exhibit interesting behaviors.” Any system, Salcedo says, where the whole is more than the sum of its parts.

Developing the technology used to design his application can potentially help answer important ethical questions regarding AI’s continued expansion and growth. The path to his work’s development is both daunting and lonely, but those challenges feed his work ethic.

“It’s intimidating to pursue this path when the academy is currently focused on LLMs,” he says. “But it’s also important to explain and explore the base technology before digging into more nuanced work, which can help audiences understand it better.” Knowing that he has the support of his professors helps Salcedo maintain excitement for his ideas. “They only ask that we ground our interests in research,” he says.

His investigations are impacting his work as a musician. “My music has gotten more interesting because of the classes I’m taking,” he says. He’s also interested in understanding whose music the academy and the world hears, exploring biases toward Western music in the canon and exploring how to reduce biases related to which kinds of music are valued.

“The work we do as technologists is far less objective than we’re led to believe,” he says.

Salcedo is especially grateful for the support he’s received during his time at MIT. “Program faculty encourage a variety of pursuits,” he says, “and ask us to advance our individual aims rather than focusing on theirs.” During his time in the graduate program, he notes with enthusiasm how often he’s been challenged to pursue his ideas.

Ultimately, Salcedo wants people to experience the joy he feels working at the intersection of the humanities and the sciences. Music and technology impact nearly everyone. Inviting audiences into his laboratory as participants in the creative and research processes offers the same kind of satisfaction he gets from crafting a great beat or solving for a thorny technical challenge. Helping audiences understand his work’s value fuels his drive to succeed.

“I want users to feel movement and explore sounds and their impact more fully,” he says.

MIT engineers design proteins by their motion, not just their shape

Proteins are far more than nutrients we track on a food label. Present in every cell of our bodies, they work like nature’s molecular machines. They walk, stretch, bend, and flex to do their jobs: pumping blood, fighting disease, building tissue, and performing countless other tasks too small for the eye to see. Their power doesn’t come from shape alone, but from how they move.

In recent years, artificial intelligence has allowed scientists to design entirely new protein structures not found in nature, tailored for specific functions, such as binding to viruses or mimicking the mechanical properties of silk for sustainable materials. But designing for structure alone is like building a car body without any control over how the engine performs. The subtle vibrations, shifts, and mechanical dynamics of a protein are just as critical to its function as its form.

Now, MIT engineers have taken a major step toward closing the gap with the development of an AI model known as VibeGen. If vibe coding lets programmers describe what they want and then AI generates the software, VibeGen does the same for living molecules: specify the vibe — the pattern of motion you want — and the model writes the protein. 

The new model allows scientists to target how a protein flexes, vibrates, and shifts between shapes in response to its environment, opening a new frontier in the design of molecular mechanics. VibeGen builds on a series of advances from the Buehler lab in agentic AI for science — systems in which multiple AI models collaborate autonomously to solve problems too complex for any single model.

“The essence of life at fundamental molecular levels lies not just in structure, but in movement,” says Markus Buehler, the Jerry McAfee Professor of Engineering in the departments of Civil and Environmental Engineering and Mechanical Engineering. “Everything from protein folding to the deformation of materials under stress follows the fundamental laws of physics.”

Buehler and his former postdoc, Bo Ni, identified a critical need for what they call physics-aware AI: systems capable of reasoning about motion, not just snapshots of molecular structure. “AI must go beyond analyzing static forms to understanding how structure and motion are fundamentally intertwined,” Buehler adds.

The new approach, described in a paper published March 24 in the journal Matter, uses generative AI to create proteins with tailor-made dynamics.

Training AI to think about motion 

The revolution in AI-driven protein science has been, overwhelmingly, a revolution in structure. Tools like AlphaFold solved the decades-old problem of predicting a protein’s three-dimensional shape. Existing generative models learned to design new shapes from scratch. But in focusing on the folded snapshot — the protein frozen in place — the field largely set aside the property that makes proteins work: their motion. “Structure prediction was such a grand challenge that it absorbed the field’s attention,” Buehler says. “But a protein’s shape is just one frame of a much longer film, and the design space extends through space and time, where structure sits on a much broader manifold.” Scientists could design a protein with a particular architecture. They couldn’t yet specify how that protein would move, flex, or vibrate once it was built.

VibeGen does something no protein design tool has done before. It inverts the traditional problem. Rather than asking, “What shape will this sequence produce?” it asks, “What sequence will make a protein move in exactly this way?”

To build VibeGen, Buehler and Ni turned to a class of AI diffusion models, the same underlying technology that powers AI image generators capable of creating realistic pictures from pure noise. In VibeGen’s case, the model starts with a random sequence of amino acids and refines it, step by step, until it converges on a sequence predicted to vibrate and flex in a targeted way.

The system works through two cooperating agents that design and challenge each other. A “designer” proposes candidate sequences aimed at a target motion profile. A “predictor” evaluates those candidates, asking whether they’ll actually move the way the designer intended. The two models iterate back and forth like an internal dialogue, until the design stabilizes into something that meets the goal. By specifying this vibrational fingerprint as the design input, VibeGen inverts the usual logic: dynamics becomes the blueprint, and structure follows.

“It’s a collaborative system,” Ni says. “The designer proposes, the predictor critiques, and the design improves through that tension.”
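A hedged sketch of that dialogue follows, with invented interfaces (designer.sample, designer.refine, predictor) standing in for VibeGen’s actual diffusion models: the designer proposes a sequence for a target vibrational profile, the predictor checks how that sequence would actually move, and the loop repeats until the two agree.

```python
def design_for_motion(target_modes, designer, predictor,
                      max_rounds=50, tol=0.05):
    """Iterate until the predicted motion profile matches the target."""
    sequence = designer.sample(target_modes)   # diffusion: noise -> sequence
    for _ in range(max_rounds):
        predicted = predictor(sequence)        # will it actually move this way?
        if error(predicted, target_modes) < tol:
            return sequence                    # design has stabilized
        # Feed the critique back so the next proposal improves.
        sequence = designer.refine(sequence, predicted, target_modes)
    return sequence

def error(a, b):
    """Euclidean distance between two vibrational profiles."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
```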

Most sequences VibeGen produces are entirely de novo, not borrowed from nature, not a variation on something evolution already made. To confirm the designs actually work, the team ran detailed physics-based molecular simulations, and the proteins behaved exactly as intended, flexing and vibrating in the patterns VibeGen had targeted.

One of the study’s most striking findings is that many different protein sequences and folds can satisfy the same vibrational target — a property the researchers call functional degeneracy. Where evolution converged on one solution, VibeGen reveals an entire family of alternatives: proteins with different structures and sequences that nonetheless move in the same way. “It suggests that nature explored only a fraction of what’s possible,” Buehler says. “For any given dynamic behavior, there may be a large, untapped space of viable designs.”

A new frontier in molecular engineering

Controlling protein dynamics could have wide-ranging applications. In medicine, proteins that can change shape on cue hold enormous potential. Many therapeutic proteins work by binding to a target molecule — a virus, a cancer cell, a misfiring receptor. How well they bind often depends not just on their shape, but on how flexibly they can adapt to their target. A protein that is engineered with motion could grip more precisely, reduce unintended interactions, and ultimately become a safer, more effective drug.

In materials science, which is an area of Buehler’s research, mechanical properties at the molecular scale shape a material’s performance. Biological materials like silk and collagen get their strength and resilience from the coordinated motion of their molecular building blocks. Designing proteins that are stiffer, more flexible, or vibrate in a certain way could lead to new sustainable fibers, impact-resistant materials, or biodegradable alternatives to petroleum-based plastics.

Buehler envisions further possibilities: structural materials for buildings or vehicles incorporating protein-based components that heal themselves after mechanical stress, or that adjust in response to heavy load.

By enabling researchers to specify motion as a direct design parameter, VibeGen treats proteins less like static shapes and more like programmable mechanical devices. The advance bridges artificial intelligence, medicine, synthetic biology, and materials engineering — toward a future in which molecular machines can be designed with the same precision and intentionality as bridges, engines, or microchips.

“VibeGen can venture into uncharted territory, proposing protein designs beyond the repertoire of evolution, tailored purely to our specifications. It’s as if we’ve invented a new creative engine that designs molecular machines on demand,” Buehler adds.

The researchers plan to refine the model further and validate their designs in the lab. They also hope to integrate motion-aware design with other AI tools, building toward systems that can design proteins to be not just dynamic, but multifunctional: machines that sense their environment, respond to signals, and adapt in real time.

The word “vibe” comes from vibration, and Buehler sees the connection as more than wordplay. “We’ve turned ‘vibe’ into a metaphor, a feeling, something subjective,” he says. “But for a protein, the vibe is the physics. It is the actual pattern of motion that determines what the molecule can do, the very machinery of life.”

The research was supported by the U.S. Department of Agriculture, the MIT-IBM Watson AI Lab, and MIT’s Generative AI Initiative.