
“This is science!” – MIT president talks about the importance of America’s research enterprise on GBH’s Boston Public Radio

In a wide-ranging conversation, MIT President Sally Kornbluth joined Jim Braude and Margery Eagan live in studio for GBH’s Boston Public Radio on Thursday, February 5. They talked about MIT, the pressures facing America’s research enterprise, the importance of science, the 2023 Congressional hearing on antisemitism, and more – including Sally’s experience as a Type 1 diabetic.

Reflecting on how research and innovation in the treatment of diabetes have advanced over decades of work, leading to markedly better patient care, Kornbluth exclaims: “This is science!”

With new financial pressures facing universities, increased competition for talented students and scholars from outside the U.S., and unprecedented pressures on university leaders and campuses, co-host Eagan asks Kornbluth what she thinks will happen in years to come.

“For us, one of the hardest things now is the endowment tax,” remarks Kornbluth. “That is $240 million a year. Think about how much science you can get for $240 million a year. Are we managing it? Yes. Are we still forging ahead on all of our exciting initiatives? Yes. But we’ve had to reconfigure things. We’ve had to merge things. And it’s not the way we should be spending our time and money.”   

Watch and listen to the full episode on YouTube. President Kornbluth appears one hour and seven minutes into the broadcast.

Following Kornbluth’s appearance, MIT Assistant Professor John Urschel – also a former offensive lineman for the Baltimore Ravens – joined Edgar B. Herwick III, host of GBH’s newest show, The Curiosity Desk, to talk about his love of his family, linear algebra, and football.

On how he eventually chose math over football, Urschel quips: “Well, I hate to break it to you, I like math better… let me tell you, when I started my PhD at MIT, I just fell in love with the place. I fell in love with this idea of being in this environment [where] everyone loves math, everyone wants to learn. I was just constantly excited every day showing up.”

Prof. Urschel appears about 2 hours and 40 minutes into the webcast on YouTube.

Coming up on The Curiosity Desk later this month…

Airing weekday afternoons from 1-2 p.m., The Curiosity Desk will welcome additional MIT guests in the coming weeks. On Thursday, Feb. 12, Professors Sangeeta Bhatia and Angela Belcher talk with Herwick about their research to improve diagnostics for ovarian cancer. We learn that ovarian cancer starts in the fallopian tubes about 80 percent of the time, and how this finding points the way to a whole new approach to diagnosing and treating the disease.

Then, on Tuesday, Feb. 17, Anette “Peko” Hosoi, Pappalardo Professor of Mechanical Engineering, and Jerry Lu MFin ’24, a former researcher at the MIT Sports Lab, visit The Curiosity Desk to discuss their work using AI to help Olympic figure skaters improve their jumps.

Helping AI agents search to get the best results out of large language models

Whether you’re a scientist brainstorming research ideas or a CEO hoping to automate a task in human resources or finance, you’ll find that artificial intelligence tools are becoming the assistants you didn’t know you needed. In particular, many professionals are tapping into the talents of semi-autonomous software systems called AI agents, which can call on AI at specific points to solve problems and complete tasks.

AI agents are particularly effective when they use large language models (LLMs) because those systems are powerful, efficient, and adaptable. One way to program such technology is by describing in code what you want your system to do (the “workflow”), including when it should use an LLM. If you were a software company trying to revamp your old codebase to use a more modern programming language for better optimizations and safety, you might build a system that uses an LLM to translate the codebase one file at a time, testing each file as you go.
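
As a concrete illustration of such a workflow, the sketch below translates a codebase one file at a time, testing each file before moving on. `call_llm` and `run_tests` are hypothetical stand-ins stubbed for illustration, not part of any real agent framework; a real system would invoke an actual LLM and test suite.

```python
def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call; prepends a marker to fake a translation."""
    return "# translated\n" + prompt

def run_tests(code: str) -> bool:
    """Stand-in for running the test suite on one translated file."""
    return code.startswith("# translated")

def translate_codebase(files: dict) -> dict:
    """Translate each source file with the LLM, testing as we go."""
    translated = {}
    for path, source in files.items():
        result = call_llm(source)
        if not run_tests(result):
            raise RuntimeError(f"translation of {path} failed its tests")
        translated[path] = result
    return translated
```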

But what happens when LLMs make mistakes? You’ll want the agent to backtrack to make another attempt, incorporating lessons it learned from previous mistakes. Coding this up can take as much effort as implementing the original agent; if your system for translating a codebase contained thousands of lines of code, then you’d be making thousands of lines of code changes or additions to support the logic for backtracking when LLMs make mistakes. 
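
To make that cost concrete, here is a minimal sketch of the retry logic a programmer would otherwise hand-write at every fallible LLM call site. The function names and feedback format are illustrative assumptions, with `call_llm` and `run_tests` supplied by the caller.

```python
def translate_with_retries(source, call_llm, run_tests, max_attempts=3):
    """Hand-rolled backtracking: retry the LLM call, feeding back failures."""
    feedback = ""
    for attempt in range(1, max_attempts + 1):
        result = call_llm(source + feedback)
        if run_tests(result):
            return result
        # Incorporate a lesson from the failed attempt into the next prompt.
        feedback = f"\n# attempt {attempt} failed its tests; try a different approach"
    raise RuntimeError(f"all {max_attempts} attempts failed")
```

Multiplied across every fallible step in a large agent, this kind of boilerplate is exactly what EnCompass aims to eliminate.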

To save programmers time and effort, researchers with MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Asari AI have developed a framework called “EnCompass.” 

With EnCompass, you no longer have to make these changes yourself. Instead, when EnCompass runs your program, it automatically backtracks if LLMs make mistakes. EnCompass can also clone the program runtime to make multiple attempts in parallel in search of the best solution. In full generality, EnCompass searches over the different possible paths your agent could take as a result of the different possible outputs of all the LLM calls, looking for the path that yields the best solution.

Then, all you have to do is annotate the locations where you may want to backtrack or clone the program runtime, and record any information that may be useful to the search strategy — the strategy used to explore the different possible execution paths of your agent. The search strategy itself is specified separately: you can use one that EnCompass provides out of the box or, if desired, implement your own.
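
This separation of workflow from search strategy can be mimicked in plain Python with generators: each `yield` plays the role of a branchpoint, and a strategy drives the workflow from outside. This is an illustrative analogy only; `workflow` and `greedy_strategy` are invented names, not EnCompass’s actual API.

```python
def workflow():
    """Agent workflow: each `yield` exposes candidates at a branchpoint."""
    word = yield ["alpha", "beta"]    # branchpoint 1: pick a word
    suffix = yield ["-1", "-2"]       # branchpoint 2: pick a suffix
    return word + suffix              # final result of this execution path

def greedy_strategy(make_workflow):
    """One interchangeable strategy: always take the first candidate."""
    gen = make_workflow()
    candidates = next(gen)            # run to the first branchpoint
    try:
        while True:
            candidates = gen.send(candidates[0])
    except StopIteration as done:     # the workflow returned a result
        return done.value
```

Swapping in a different strategy (random sampling, beam search) would require no change to `workflow` itself, which is the design point the researchers describe.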

“With EnCompass, we’ve separated the search strategy from the underlying workflow of an AI agent,” says lead author Zhening Li ’25, MEng ’25, who is an MIT electrical engineering and computer science (EECS) PhD student, CSAIL researcher, and research consultant at Asari AI. “Our framework lets programmers easily experiment with different search strategies to find the one that makes the AI agent perform the best.” 

EnCompass was used for agents implemented as Python programs that call LLMs, where it demonstrated noticeable code savings: it reduced the coding effort of implementing search by up to 80 percent across agents, such as an agent for translating code repositories and another for discovering transformation rules of digital grids. In the future, EnCompass could enable agents to tackle large-scale tasks, including managing massive code libraries, designing and carrying out science experiments, and creating blueprints for rockets and other hardware.

Branching out

When programming your agent, you mark particular operations — such as calls to an LLM — where results may vary. These annotations are called “branchpoints.” If you imagine your agent program as generating a single plot line of a story, then adding branchpoints turns the story into a choose-your-own-adventure story game, where branchpoints are locations where the plot branches into multiple future plot lines. 

You can then specify the strategy that EnCompass uses to navigate that story game, in search of the best possible ending to the story. This can include launching parallel threads of execution or backtracking to a previous branchpoint when you get stuck in a dead end.

Users can also plug-and-play a few common search strategies provided by EnCompass out of the box, or define their own custom strategy. For example, you could opt for Monte Carlo tree search, which builds a search tree by balancing exploration and exploitation, or beam search, which keeps the best few outputs from every step. EnCompass makes it easy to experiment with different approaches to find the best strategy to maximize the likelihood of successfully completing your task.
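
For intuition, a generic beam search over candidate outputs can be sketched in a few lines. Here `expand` and `score` are caller-supplied assumptions standing in for LLM sampling and quality evaluation; this is not EnCompass code.

```python
def beam_search(initial, expand, score, beam_width=2, steps=2):
    """Keep only the `beam_width` best candidates after each expansion step."""
    beam = [initial]
    for _ in range(steps):
        # Generate all children of the surviving candidates...
        expanded = [child for cand in beam for child in expand(cand)]
        # ...then prune back down to the best few by score.
        beam = sorted(expanded, key=score, reverse=True)[:beam_width]
    return max(beam, key=score)
```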

The coding efficiency of EnCompass

So just how code-efficient is EnCompass for adding search to agent programs? According to the researchers’ findings, the framework drastically cut the amount of code programmers needed to add to their agent programs to implement search, helping them experiment with different strategies to find the one that performs best.

For example, the researchers applied EnCompass to an agent that translates a repository of code from the Java programming language, which is commonly used to program apps and enterprise software, to Python. They found that implementing search with EnCompass — mainly adding branchpoint annotations and annotations that record how well each step did — required 348 fewer lines of code (about 82 percent less) than implementing it by hand. They also demonstrated how EnCompass let them easily try out different search strategies; the best turned out to be a two-level beam search algorithm, which boosted accuracy by 15 to 40 percent across five different repositories at a search budget of 16 times the LLM calls made by the agent without search.

“As LLMs become a more integral part of everyday software, it becomes more important to understand how to efficiently build software that leverages their strengths and works around their limitations,” says co-author Armando Solar-Lezama, who is an MIT professor of EECS and CSAIL principal investigator. “EnCompass is an important step in that direction.”

The researchers add that EnCompass targets agents where a program specifies the steps of the high-level workflow; the current iteration of their framework is less applicable to agents that are entirely controlled by an LLM. “In those agents, instead of having a program that specifies the steps and then using an LLM to carry out those steps, the LLM itself decides everything,” says Li. “There is no underlying programmatic workflow, so you can execute inference-time search on whatever the LLM invents on the fly. In this case, there’s less need for a tool like EnCompass that modifies how a program executes with search and backtracking.”

Li and his colleagues plan to extend EnCompass to more general search frameworks for AI agents. They also plan to test their system on more complex tasks to refine it for real-world uses, including at companies. What’s more, they’re evaluating how well EnCompass helps agents work with humans on tasks like brainstorming hardware designs or translating much larger code libraries. For now, EnCompass is a powerful building block that enables humans to tinker with AI agents more easily, improving their performance.

“EnCompass arrives at a timely moment, as AI-driven agents and search-based techniques are beginning to reshape workflows in software engineering,” says Carnegie Mellon University Professor Yiming Yang, who wasn’t involved in the research. “By cleanly separating an agent’s programming logic from its inference-time search strategy, the framework offers a principled way to explore how structured search can enhance code generation, translation, and analysis. This abstraction provides a solid foundation for more systematic and reliable search-driven approaches to software development.”  

Li and Solar-Lezama wrote the paper with two Asari AI researchers: Caltech Professor Yisong Yue, an advisor at the company; and senior author Stephan Zheng, who is the founder and CEO. Their work was supported by Asari AI.

The team’s work was presented at the Conference on Neural Information Processing Systems (NeurIPS) in December.

Brian Hedden named co-associate dean of Social and Ethical Responsibilities of Computing

Brian Hedden PhD ’12 has been appointed co-associate dean of the Social and Ethical Responsibilities of Computing (SERC) at MIT, a cross-cutting initiative in the MIT Schwarzman College of Computing, effective Jan. 16.

Hedden is a professor in the Department of Linguistics and Philosophy, holding an MIT Schwarzman College of Computing shared position with the Department of Electrical Engineering and Computer Science (EECS). He joined the MIT faculty last fall from the Australian National University and the University of Sydney, where he previously served as a faculty member. He earned his BA from Princeton University and his PhD from MIT, both in philosophy. Hedden is also a PI in the Laboratory for Information and Decision Systems (LIDS).

“Brian is a natural and compelling choice for SERC, as a philosopher whose work speaks directly to the intellectual challenges facing education and research today, particularly in computing and AI. His expertise in epistemology, decision theory, and ethics addresses questions that have become increasingly urgent in an era defined by information abundance and artificial intelligence. His scholarship exemplifies the kind of interdisciplinary inquiry that SERC exists to advance,” says Dan Huttenlocher, dean of the MIT Schwarzman College of Computing and the Henry Ellis Warren Professor of Electrical Engineering and Computer Science.

Hedden’s research focuses on how we ought to form beliefs and make decisions, and it explores how philosophical thinking about rationality can yield insights into contemporary ethical issues, including ethics of AI. He is the author of “Reasons without Persons: Rationality, Identity, and Time” (Oxford University Press, 2015) and articles on topics such as collective action problems, legal standards of proof, algorithmic fairness, and political polarization.

Joining co-associate dean Nikos Trichakis, the J.C. Penney Professor of Management at the MIT Sloan School of Management, Hedden will help lead SERC and advance the initiative’s ongoing research, teaching, and engagement efforts. He succeeds professor of philosophy Caspar Hare, who stepped down at the conclusion of his three-year term on Sept. 1, 2025.

Since its inception in 2020, SERC has launched a range of programs and activities designed to cultivate responsible “habits of mind and action” among those who create and deploy computing technologies, while fostering the development of technologies in the public interest.

The SERC Scholars Program invites undergraduate and graduate students to work alongside postdoctoral mentors to explore interdisciplinary ethical challenges in computing. The initiative also hosts an annual prize competition that challenges MIT students to envision the future of computing, publishes a twice-yearly series of case studies, and collaborates on coordinated curricular materials, including active-learning projects, homework assignments, and in-class demonstrations. In 2024, SERC introduced a new seed grant program to support MIT researchers investigating ethical technology development; to date, two rounds of grants have been awarded to 24 projects.

Antonio Torralba, three MIT alumni named 2025 ACM fellows

Antonio Torralba, Delta Electronics Professor of Electrical Engineering and Computer Science and faculty head of artificial intelligence and decision-making at MIT, has been named to the 2025 cohort of Association for Computing Machinery (ACM) Fellows. He shares the honor of an ACM Fellowship with three MIT alumni: Eytan Adar ’97, MEng ’98; George Candea ’97, MEng ’98; and Gookwon Edward Suh SM ’01, PhD ’05.

A principal investigator within the Computer Science and Artificial Intelligence Laboratory, Torralba received his BS in telecommunications engineering from the Universitat Politècnica de Catalunya, in Spain, in 1994, and a PhD in signal, image, and speech processing from the Institut National Polytechnique de Grenoble, in France, in 2000. At different points in his MIT career, he has been director of both the MIT Quest for Intelligence (now the MIT Siegel Family Quest for Intelligence) and the MIT-IBM Watson AI Lab. 

Torralba’s research focuses on computer vision, machine learning, and human visual perception; as he puts it, “I am interested in building systems that can perceive the world like humans do.” Alongside Phillip Isola and William Freeman, he recently co-authored “Foundations of Computer Vision,” an 800-plus page textbook exploring the foundations and core principles of the field. 

Among other awards and recognitions, he is the recipient of the 2008 National Science Foundation Career award; the 2010 J. K. Aggarwal Prize from the International Association for Pattern Recognition; the 2017 Frank Quick Faculty Research Innovation Fellowship; the Louis D. Smullin (’39) Award for Teaching Excellence; and the 2020 PAMI Mark Everingham Prize. In 2021, he was awarded the inaugural Thomas Huang Memorial Prize by the Pattern Analysis and Machine Intelligence Technical Committee and was named a fellow of the Association for the Advancement of Artificial Intelligence. In 2022, he received an honorary doctoral degree from the Universitat Politècnica de Catalunya. 

ACM fellows, the highest honor bestowed by the professional organization, are registered members of the society selected by their peers for outstanding accomplishments in computing and information technology and/or outstanding service to ACM and the larger computing community.

3 Questions: Using AI to accelerate the discovery and design of therapeutic drugs

In the pursuit of solutions to complex global challenges including disease, energy demands, and climate change, scientific researchers, including at MIT, have turned to artificial intelligence, and to quantitative analysis and modeling, to design and construct engineered cells with novel properties. The engineered cells can be programmed to become new therapeutics — battling, and perhaps eradicating, diseases.

James J. Collins is one of the founders of the field of synthetic biology, and is also a leading researcher in systems biology, the interdisciplinary approach that uses mathematical analysis and modeling of complex systems to better understand biological systems. His research has led to the development of new classes of diagnostics and therapeutics, including in the detection and treatment of pathogens like Ebola, Zika, SARS-CoV-2, and antibiotic-resistant bacteria. Collins, the Termeer Professor of Medical Engineering and Science and professor of biological engineering at MIT, is a core faculty member of the Institute for Medical Engineering and Science (IMES), the director of the MIT Abdul Latif Jameel Clinic for Machine Learning in Health, as well as an institute member of the Broad Institute of MIT and Harvard, and core founding faculty at the Wyss Institute for Biologically Inspired Engineering, Harvard.

In this Q&A, Collins speaks about his latest work and goals for this research.

Q.  You’re known for collaborating with colleagues across MIT, and at other institutions. How have these collaborations and affiliations helped you with your research? 

A: Collaboration has been central to the work in my lab. At the MIT Jameel Clinic for Machine Learning in Health, I formed a collaboration with Regina Barzilay [the Delta Electronics Professor in the MIT Department of Electrical Engineering and Computer Science and affiliate faculty member at IMES] and Tommi Jaakkola [the Thomas Siebel Professor of Electrical Engineering and Computer Science and the Institute for Data, Systems, and Society] to use deep learning to discover new antibiotics. This effort combined our expertise in artificial intelligence, network biology, and systems microbiology, leading to the discovery of halicin, a potent new antibiotic effective against a broad range of multidrug-resistant bacterial pathogens. Our results were published in Cell in 2020 and showcased the power of bringing together complementary skill sets to tackle a global health challenge.

At the Wyss Institute, I’ve worked closely with Donald Ingber [the Judah Folkman Professor of Vascular Biology at Harvard Medical School and the Vascular Biology Program at Boston Children’s Hospital, and Hansjörg Wyss Professor of Biologically Inspired Engineering at Harvard], leveraging his organs-on-chips technology to test the efficacy of AI-discovered and AI-generated antibiotics. These platforms allow us to study how drugs behave in human tissue-like environments, complementing traditional animal experiments and providing a more nuanced view of their therapeutic potential.

The common thread across our many collaborations is the ability to combine computational predictions with cutting-edge experimental platforms, accelerating the path from ideas to validated new therapies.

Q. Your research has led to many advances in designing novel antibiotics, using generative AI and deep learning. Can you talk about some of the advances you’ve been a part of in the development of drugs that can battle multi-drug-resistant pathogens, and what you see on the horizon for breakthroughs in this arena?

A: In 2025, our lab published a study in Cell demonstrating how generative AI can be used to design completely new antibiotics from scratch. We used genetic algorithms and variational autoencoders to generate millions of candidate molecules, exploring both fragment-based designs and entirely unconstrained chemical space. After computational filtering, retrosynthetic modeling, and medicinal chemistry review, we synthesized 24 compounds and tested them experimentally. Seven showed selective antibacterial activity. One lead, NG1, was highly narrow-spectrum, eradicating multi-drug-resistant Neisseria gonorrhoeae, including strains resistant to first-line therapies, while sparing commensal species. Another, DN1, targeted methicillin-resistant Staphylococcus aureus (MRSA) and cleared infections in mice through broad membrane disruption. Both were non-toxic and showed low rates of resistance.

Looking ahead, we are using deep learning to design antibiotics with drug-like properties that make them stronger candidates for clinical development. By integrating AI with high-throughput biological testing, we aim to accelerate the discovery and design of antibiotics that are novel, safe, and effective, ready for real-world therapeutic use. This approach could transform how we respond to drug-resistant bacterial pathogens, moving from a reactive to a proactive strategy in antibiotic development.

Q. You’re a co-founder of Phare Bio, a nonprofit organization that uses AI to discover new antibiotics, and the Collins Lab has helped to launch the Antibiotics-AI Project in collaboration with Phare Bio. Can you tell us more about what you hope to accomplish with these collaborations, and how they tie back to your research goals?

A: We founded Phare Bio as a nonprofit to take the most promising antibiotic candidates emerging from the Antibiotics-AI Project at MIT and advance them toward the clinic. The idea is to bridge the gap between discovery and development by collaborating with biotech companies, pharmaceutical partners, AI companies, philanthropies, other nonprofits, and even nation states. Akhila Kosaraju has been doing a brilliant job leading Phare Bio, coordinating these efforts and moving candidates forward efficiently.

Recently, we received a grant from ARPA-H to use generative AI to design 15 new antibiotics and develop them as pre-clinical candidates. This project builds directly on our lab’s research, combining computational design with experimental testing to create novel antibiotics that are ready for further development. By integrating generative AI, biology, and translational partnerships, we hope to create a pipeline that can respond more rapidly to the global threat of antibiotic resistance, ultimately delivering new therapies to patients who need them most.

Katie Spivakovsky wins 2026 Churchill Scholarship

MIT senior Katie Spivakovsky has been selected as a 2026-27 Churchill Scholar and will undertake an MPhil in biological sciences at the Wellcome Sanger Institute at Cambridge University in the U.K. this fall.

Spivakovsky, who is double-majoring in biological engineering and artificial intelligence, with minors in mathematics and biology, aims to integrate computation and bioengineering in an academic research career focused on developing robust, scalable solutions that promote equitable health outcomes.

At MIT’s Bathe BioNanoLab, Spivakovsky investigates therapeutic applications of DNA origami, DNA-scaffolded nanoparticles for gene and mRNA delivery, and co-authored a manuscript in press at Science. She leads the development of an immune therapy for cancer cachexia with a team supported by MIT’s BioMakerSpace; this work earned a silver medal at the international synthetic biology competition iGEM and was published in the MIT Undergraduate Research Journal. Previously, she worked on Merck’s Modeling & Informatics team, characterizing a cancer-associated protein mutation, and at the New York Structural Biology Center, where she improved cryogenic electron microscopy particle detection models.

On campus, Spivakovsky serves as director of the Undergraduate Initiative in the MIT Biotech Group. She is deeply committed to teaching and mentoring, and has served as a lecturer and co-director for class 6.S095 (Probability Problem Solving), a teaching assistant for classes 20.309 (Bioinstrumentation) and 20.A06 (Hands-on Making in Biological Engineering), a lab assistant for 6.300 (Signal Processing), and as an associate advisor.

“Katie is a brilliant researcher who has a keen intellectual curiosity that will make her a leader in biological engineering in the future. We are proud that she will be representing MIT at Cambridge University,” says Kim Benard, associate dean of distinguished fellowships.

The Churchill Scholarship is a highly competitive fellowship that annually offers 16 American students the opportunity to pursue a funded graduate degree in science, mathematics, or engineering at Churchill College within Cambridge University. The scholarship, established in 1963, honors former British Prime Minister Winston Churchill’s vision for U.S.-U.K. scientific exchange. Since 2017, two Kanders Churchill Scholarships have also been awarded each year for studies in science policy.

MIT students interested in learning more about the Churchill Scholarship should contact Kim Benard in MIT Career Advising and Professional Development.

Counter intelligence

How can artificial intelligence step out of a screen and become something we can physically touch and interact with?

That question formed the foundation of class 4.043/4.044 (Interaction Intelligence), an MIT course focused on designing a new category of AI-driven interactive objects. Known as large language objects (LLOs), these physical interfaces extend large language models into the real world. Their behaviors can be deliberately generated for specific people or applications, and their interactions can evolve from simple to increasingly sophisticated — providing meaningful support for both novice and expert users.

“I came to the realization that, while powerful, these new forms of intelligence still remain largely ignorant of the world outside of language,” says Marcelo Coelho, associate professor of the practice in the MIT Department of Architecture, who has been teaching the design studio for several years and directs the Design Intelligence Lab. “They lack real-time, contextual understanding of our physical surroundings, bodily experiences, and social relationships to be truly intelligent. In contrast, LLOs are physically situated and interact in real time with their physical environment. The course is an attempt to both address this gap and develop a new kind of design discipline for the age of AI.”

Given the assignment to design an interactive device that they would want in their lives, students Jacob Payne and Ayah Mahmoud focused on the kitchen. While they each enjoy cooking and baking, their design inspiration came from the first home computer: the Honeywell 316 Kitchen Computer, marketed by Neiman Marcus in 1969. Priced at $10,000, there is no record of one ever being sold.

“It was an ambitious but impractical early attempt at a home kitchen computer,” says Payne, an architecture graduate student. “It made an intriguing historical reference for the project.”

“As somebody who likes learning to cook — especially now, in college as an undergrad — the thought of designing something that makes cooking easy for those who might not have a cooking background and just want a nice meal that satisfies their cravings was a great starting point for me,” says Mahmoud, a senior design major.

“We thought about the leftover ingredients you have in the refrigerator or pantry, and how AI could help you find new creative uses for things that you may otherwise throw away,” says Payne.

Generative cuisine

The students designed their device — named Kitchen Cosmo — with instructions to function as a “recipe generator.” One challenge was prompting the LLM to consistently acknowledge real-world cooking parameters, such as heating, timing, or temperature. One issue they worked out was having the LLM recognize flavor profiles and spices accurate to regional and cultural dishes around the world to support a wider range of cuisines. Troubleshooting included taste-testing recipes Kitchen Cosmo generated. Not every early recipe produced a winning dish.

“There were lots of small things that AI wasn’t great at conceptually understanding,” says Mahmoud. “An LLM needs to fundamentally understand human taste to make a great meal.”

They fine-tuned their device to allow for the myriad ways people approach preparing a meal. Is this breakfast, lunch, dinner, or a snack? How advanced of a cook are you? How much meal prep time do you have? How many servings will you make? Dietary preferences were also programmed, as well as the type of mood or vibe you want to achieve. Are you feeling nostalgic, or are you in a celebratory mood? There’s a dial for that.

“These selections were the focal point of the device because we were curious to see how the LLM would interpret subjective adjectives as inputs and use them to transform the type of recipe outputs we would get,” says Payne.

Unlike most AI interactions that tend to be invisible, Payne and Mahmoud wanted their device to be more of a “partner” in the kitchen. The tactile interface was intentionally designed to structure the interaction, giving users a physical control over how the AI responded.

“While I’ve worked with electronics and hardware before, this project pushed me to integrate the components with a level of precision and refinement that felt much closer to a product-ready device,” says Payne of the course work.

Retro and red

After their electronic work was completed, the students designed a series of models using cardboard until settling on the final look, which Payne describes as “retro.” The body was designed in 3D modeling software and printed. In a nod to the original Honeywell computer, they painted it red.

A thin, rectangular device about 18 inches in height, Kitchen Cosmo has a webcam that hinges open to scan ingredients set on a counter. It translates these into a recipe that takes into consideration general spices and condiments common in most households. An integrated thermal printer delivers a printed recipe that is torn off. Recipes can be stored in a plastic receptacle on its base.

While Kitchen Cosmo made a modest splash in design magazines, both students have ideas where they will take future iterations.

Payne would like to see it “take advantage of a lot of the data we have in the kitchen and use AI as a mediator, offering tips for how to improve on what you’re cooking at that moment.”

Mahmoud is looking at how to optimize Kitchen Cosmo for her thesis. Classmates have given feedback to upgrade its abilities. One suggestion is to provide multi-person instructions that give several people tasks needed to complete a recipe. Another idea is to create a “learning mode” in which a kitchen tool — for example, a paring knife — is set in front of Kitchen Cosmo, and it delivers instructions on how to use the tool. Mahmoud has been researching food science history as well.

“I’d like to get a better handle on how to train AI to fully understand food so it can tailor recipes to a user’s liking,” she says.

Having begun her MIT education as a geologist, Mahmoud says her pivot to design has been a revelation. Each design class has been inspiring; Coelho’s course was her first to include designing with AI. Referencing the often-mentioned MIT analogy of “drinking from a firehose,” she says the course helped define a path for her in product design.

“For the first time, in that class, I felt like I was finally drinking as much as I could and not feeling overwhelmed. I see myself doing design long-term, which is something I didn’t think I would have said previously about technology.” 

SMART launches new Wearable Imaging for Transforming Elderly Care research group

What if ultrasound imaging were no longer confined to hospitals? Patients with chronic conditions, such as hypertension and heart failure, could be monitored continuously in real time at home or on the move, giving health care practitioners ongoing clinical insights instead of occasional snapshots — a scan here and a check-up there. This shift from reactive, hospital-based care to preventative, community- and home-based care could enable earlier detection, timely intervention, and truly personalized care.

Bringing this vision to reality, the Singapore-MIT Alliance for Research and Technology (SMART), MIT’s research enterprise in Singapore, has launched a new collaborative research project: Wearable Imaging for Transforming Elderly Care (WITEC). 

WITEC marks a pioneering effort in wearable technology, medical imaging, and materials science research. It will be dedicated to foundational research and development of the world’s first wearable ultrasound imaging system capable of 48-hour intermittent cardiovascular imaging for continuous and real-time monitoring and diagnosis of chronic conditions such as hypertension and heart failure. 

This multi-million dollar, multi-year research program, supported by the National Research Foundation (NRF) Singapore under its Campus for Research Excellence and Technological Enterprise program, brings together top researchers and expertise from MIT, Nanyang Technological University (NTU Singapore), and the National University of Singapore (NUS). Tan Tock Seng Hospital (TTSH) is WITEC’s clinical collaborator and will conduct patient trials to validate long-term heart imaging for chronic cardiovascular disease management.

“Addressing society’s most pressing challenges requires innovative, interdisciplinary thinking. Building on SMART’s long legacy in Singapore as a hub for research and innovation, WITEC will harness interdisciplinary expertise — from MIT and leading institutions in Singapore — to advance transformative research that creates real-world impact and benefits Singapore, the U.S., and societies all over. This is the kind of collaborative research that not only pushes the boundaries of knowledge, but also redefines what is possible for the future of health care,” says Bruce Tidor, chief executive officer and interim director of SMART, who is also an MIT professor of biological engineering and electrical engineering and computer science.

Industry-leading precision equipment and capabilities

To support this work, WITEC’s laboratory is equipped with advanced tools, including Southeast Asia’s first sub-micrometer 3D printer and the latest Verasonics Vantage NXT 256 ultrasonic imaging system, which is the first unit of its kind in Singapore.

Unlike conventional 3D printers that operate at millimeter or micrometer scales, WITEC’s 3D printer can achieve sub‑micrometer resolution, allowing components to be fabricated at the level of single cells or tissue structures. With this capability, WITEC researchers can prototype bioadhesive materials and device interfaces with unprecedented accuracy — essential to ensuring skin‑safe adhesion and stable, long‑term imaging quality.

Complementing this is the latest Verasonics ultrasonic imaging system. Equipped with a new transducer adapter and supporting a significantly larger number of probe control channels than existing systems, it gives researchers the freedom to test highly customized imaging methods. This allows more complex beamforming, higher‑resolution image capture, and integration with AI‑based diagnostic models — opening the door to long‑duration, real‑time cardiovascular imaging not possible with standard hospital equipment.

Together, these technologies allow WITEC to accelerate the design, prototyping, and testing of its wearable ultrasound imaging system, and to demonstrate imaging quality on phantoms and healthy subjects.

Transforming chronic disease care through wearable innovation 

Chronic diseases are rising rapidly in Singapore and globally, especially among the aging population and individuals with multiple long-term conditions. This trend highlights the urgent need for effective home-based care and easy-to-use monitoring tools that go beyond basic wellness tracking.

Current consumer wearables, such as smartwatches and fitness bands, offer limited physiological data like heart rate or step count. While useful for general health, they lack the depth needed to support chronic disease management. Traditional ultrasound systems, although clinically powerful, are bulky, operator-dependent, can only be deployed episodically within hospitals, and are limited to snapshots in time, making them unsuitable for long-term, everyday use.

WITEC aims to bridge this gap with its wearable ultrasound imaging system that uses bioadhesive technology to enable up to 48 hours of uninterrupted imaging. Combined with AI-enhanced diagnostics, the innovation is aimed at supporting early detection, home-based pre-diagnosis, and continuous monitoring of chronic diseases.

Beyond improving patient outcomes, this innovation could help ease labor shortages by freeing up ultrasound operators, nurses, and doctors to focus on more complex care, while reducing demand for hospital beds and resources. By shifting monitoring to homes and communities, WITEC’s technology will enable patient self-management and timely intervention, potentially lowering health-care costs and alleviating the increasing financial and manpower pressures of an aging population.

Driving innovation through interdisciplinary collaboration

WITEC is led by the following co-lead principal investigators: Xuanhe Zhao, professor of mechanical engineering and professor of civil and environmental engineering at MIT; Joseph Sung, senior vice president of health and life sciences at NTU Singapore and dean of the Lee Kong Chian School of Medicine (LKCMedicine); Cher Heng Tan, assistant dean of clinical research at LKCMedicine; Chwee Teck Lim, NUS Society Professor of Biomedical Engineering at NUS and director of the Institute for Health Innovation and Technology at NUS; and Xiaodong Chen, distinguished university professor at the School of Materials Science and Engineering within NTU. 

“We’re extremely proud to bring together an exceptional team of researchers from Singapore and the U.S. to pioneer core technologies that will make wearable ultrasound imaging a reality. This endeavor combines deep expertise in materials science, data science, AI diagnostics, biomedical engineering, and clinical medicine. Our phased approach will accelerate translation into a fully wearable platform that reshapes how chronic diseases are monitored, diagnosed and managed,” says Zhao, who serves as a co-lead PI of WITEC.

Research roadmap with broad impact across health care, science, industry, and economy

Bringing together leading experts across interdisciplinary fields, WITEC will advance foundational work in soft materials, transducers, microelectronics, data science and AI diagnostics, clinical medicine, and biomedical engineering. As a deep-tech R&D group, its breakthroughs will have the potential to drive innovation in health-care technology and manufacturing, wearable ultrasonic imaging, metamaterials, diagnostics, and AI-powered health analytics. WITEC’s work is also expected to accelerate growth in high-value jobs across research, engineering, clinical validation, and health-care services, and attract strategic investments that foster biomedical innovation and industry partnerships in Singapore, the United States, and beyond.

“Chronic diseases present significant challenges for patients, families, and health-care systems, and with aging populations such as Singapore’s, those challenges will only grow without new solutions. Our research into a wearable ultrasound imaging system aims to transform daily care for those living with cardiovascular and other chronic conditions — providing clinicians with richer, continuous insights to guide treatment, while giving patients greater confidence and control over their own health. WITEC’s pioneering work marks an important step toward shifting care from episodic, hospital-based interventions to more proactive, everyday management in the community,” says Sung, who serves as co‑lead PI of WITEC.

Led by Violet Hoon, senior consultant at TTSH, clinical trials are expected to commence this year to validate long-term heart monitoring in the management of chronic cardiovascular disease. Over the next three years, WITEC aims to develop a fully integrated platform capable of 48-hour intermittent imaging through innovations in bioadhesive couplants, nanostructured metamaterials, and ultrasonic transducers.

As MIT’s research enterprise in Singapore, SMART is committed to advancing breakthrough technologies that address pressing global challenges. WITEC adds to SMART’s existing research endeavors that foster a rich exchange of ideas through collaboration with leading researchers and academics from the United States, Singapore, and around the world in key areas such as antimicrobial resistance, cell therapy development, precision agriculture, AI, and 3D-sensing technologies.

How generative AI can help scientists synthesize complex materials

Generative artificial intelligence models have been used to create enormous libraries of theoretical materials that could help solve all kinds of problems. Now, scientists just have to figure out how to make them.

In many cases, materials synthesis is not as simple as following a recipe in the kitchen. Factors like the temperature and length of processing can yield huge changes in a material’s properties that make or break its performance. That has limited researchers’ ability to test millions of promising model-generated materials.

Now, MIT researchers have created an AI model that guides scientists through the process of making materials by suggesting promising synthesis routes. In a new paper, they showed the model delivers state-of-the-art accuracy in predicting effective synthesis pathways for a class of materials called zeolites, which could be used to improve catalysis, adsorption, and ion exchange processes. Following its suggestions, the team synthesized a new zeolite material that showed improved thermal stability.

The researchers believe their new model could break the biggest bottleneck in the materials discovery process.

“To use an analogy, we know what kind of cake we want to make, but right now we don’t know how to bake the cake,” says lead author Elton Pan, a PhD candidate in MIT’s Department of Materials Science and Engineering (DMSE). “Materials synthesis is currently done through domain expertise and trial and error.”

The paper describing the work appears today in Nature Computational Science. Joining Pan on the paper are Soonhyoung Kwon ’20, PhD ’24; DMSE postdoc Sulin Liu; chemical engineering PhD student Mingrou Xie; DMSE postdoc Alexander J. Hoffman; Research Assistant Yifei Duan SM ’25; DMSE visiting student Thorben Prein; DMSE PhD candidate Killian Sheriff; MIT Robert T. Haslam Professor in Chemical Engineering Yuriy Roman-Leshkov; Valencia Polytechnic University Professor Manuel Moliner; MIT Paul M. Cook Career Development Professor Rafael Gómez-Bombarelli; and MIT Jerry McAfee Professor in Engineering Elsa Olivetti.

Learning to bake

Massive investments in generative AI have led companies like Google and Meta to create huge databases filled with material recipes that, at least theoretically, have properties like high thermal stability and selective adsorption of gases. But making those materials can require weeks or months of careful experiments that test specific reaction temperatures, times, precursor ratios, and other factors.

“People rely on their chemical intuition to guide the process,” Pan says. “Humans are linear. If there are five parameters, we might keep four of them constant and vary one of them linearly. But machines are much better at reasoning in a high-dimensional space.”

Synthesis is now often the most time-consuming step in a material’s journey from hypothesis to use.

To help scientists navigate that process, the MIT researchers trained a generative AI model on over 23,000 material synthesis recipes described in over 50 years of scientific papers. The researchers iteratively added random “noise” to the recipes during training, and the model learned to de-noise them, sampling from random noise to find promising synthesis routes.

The result is DiffSyn, which uses an approach in AI known as diffusion.

“Diffusion models are basically a generative AI model like ChatGPT, but more like the DALL-E image generation model,” Pan says. “During inference, it converts noise into meaningful structure by subtracting a little bit of noise at each step. In this case, the ‘structure’ is the synthesis route for a desired material.”
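The de-noising idea Pan describes can be sketched with the standard diffusion identities. The toy below (NumPy; the three-parameter “recipe” vector and the noise schedule are illustrative assumptions, not DiffSyn’s actual representation or code) shows the forward noising step and how a perfect noise predictor would recover the clean recipe — in DiffSyn, a trained network plays the predictor’s role and does this over many small steps.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical normalized "recipe" vector: [temperature, time, precursor ratio].
x0 = np.array([0.8, -0.3, 0.5])

# Simple linear noise schedule; alpha_bar[t] is the signal fraction left at step t.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)

# Forward (training) direction: corrupt a known recipe with Gaussian noise.
t = 600
eps = rng.standard_normal(3)
x_t = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

# Reverse (inference) direction: a network trained to predict eps lets you
# estimate the clean recipe from the noisy one. Here the true eps stands in
# for a perfect predictor, so the identity inverts exactly.
x0_hat = (x_t - np.sqrt(1.0 - alpha_bar[t]) * eps) / np.sqrt(alpha_bar[t])

print(np.allclose(x0_hat, x0))
```

With an imperfect learned predictor, the reverse process instead subtracts a little estimated noise at each step, which is the “converting noise into meaningful structure” Pan describes.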

When a scientist using DiffSyn enters a desired material structure, the model offers some promising combinations of reaction temperatures, reaction times, precursor ratios, and more.

“It basically tells you how to bake your cake,” Pan says. “You have a cake in mind, you feed it into the model, the model spits out the synthesis recipes. The scientist can pick whichever synthesis path they want, and there are simple ways to quantify the most promising synthesis path from what we provide, which we show in our paper.”

To test their system, the researchers used DiffSyn to suggest novel synthesis paths for a zeolite, a complex class of materials that takes time to form into a testable sample.

“Zeolites have a very high-dimensional synthesis space,” Pan says. “Zeolites also tend to take days or weeks to crystallize, so the impact [of finding the best synthesis pathway faster] is much higher than other materials that crystallize in hours.”

The researchers were able to make the new zeolite material using synthesis pathways suggested by DiffSyn. Subsequent testing revealed the material had a promising morphology for catalytic applications.

“Scientists have been trying out different synthesis recipes one by one,” Pan says. “That makes the process very time-consuming. This model can sample 1,000 of them in under a minute. It gives you a very good initial guess on synthesis recipes for completely new materials.”

Accounting for complexity

Previously, researchers have built machine-learning models that mapped a material to a single recipe. Those approaches do not take into account that there are different ways to make the same material.

DiffSyn is trained to map material structures to many different possible synthesis paths. Pan says that is better aligned with experimental reality.

“This is a paradigm shift away from one-to-one mapping between structure and synthesis to one-to-many mapping,” Pan says. “That’s a big reason why we achieved strong gains on the benchmarks.”
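The one-to-one versus one-to-many distinction can be made concrete with a small sketch (NumPy; the two “known routes” and the mixture-style sampler are illustrative stand-ins, not DiffSyn’s trained model). A regression-style model collapses distinct valid routes into one average, while a generative model represents a distribution over routes and can sample several.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two hypothetical valid routes to the same material: (temperature C, hours).
known_routes = np.array([[150.0, 24.0], [90.0, 72.0]])

# One-to-one mapping: a single predicted recipe, the average of the two real
# routes -- a route that may correspond to neither actual synthesis condition.
one_to_one = known_routes.mean(axis=0)

# One-to-many mapping: sample from a distribution over routes instead.
# A simple mixture (pick a route, add small noise) stands in for the model.
def sample_routes(n):
    picks = rng.integers(0, len(known_routes), size=n)
    return known_routes[picks] + rng.normal(0.0, 2.0, size=(n, 2))

candidates = sample_routes(5)  # several plausible, distinct synthesis routes
```

The averaging failure of the one-to-one approach is one intuition for why a distribution-valued model performs better on synthesis benchmarks.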

Moving forward, the researchers believe the approach should work to train other models that guide the synthesis of materials outside of zeolites, including metal-organic frameworks, inorganic solids, and other materials that have more than one possible synthesis pathway.

“This approach could be extended to other materials,” Pan says. “Now, the bottleneck is finding high-quality data for different material classes. But zeolites are complicated, so I can imagine they are close to the upper-bound of difficulty. Eventually, the goal would be interfacing these intelligent systems with autonomous real-world experiments, and agentic reasoning on experimental feedback to dramatically accelerate the process of materials design.”

The work was supported by MIT International Science and Technology Initiatives (MISTI), the National Science Foundation, Generalitat Valenciana, the Office of Naval Research, ExxonMobil, and the Agency for Science, Technology and Research in Singapore.

The philosophical puzzle of rational artificial intelligence

To what extent can an artificial system be rational?

A new MIT course, 6.S044/24.S00 (AI and Rationality), doesn’t seek to answer this question. Instead, it challenges students to explore this and other philosophical problems through the lens of AI research. For the next generation of scholars, concepts of rationality and agency could prove integral in AI decision-making, especially when influenced by how humans understand their own cognitive limits and their constrained, subjective views of what is or isn’t rational.

This inquiry is rooted in a deep relationship between computer science and philosophy, which have long collaborated in formalizing what it is to form rational beliefs, learn from experience, and make rational decisions in pursuit of one’s goals.

“You’d imagine computer science and philosophy are pretty far apart, but they’ve always intersected. The technical parts of philosophy really overlap with AI, especially early AI,” says course instructor Leslie Kaelbling, the Panasonic Professor of Computer Science and Engineering at MIT, calling to mind Alan Turing, who was both a computer scientist and a philosopher. Kaelbling herself holds an undergraduate degree in philosophy from Stanford University, noting that computer science wasn’t available as a major at the time.

Brian Hedden, a professor in the Department of Linguistics and Philosophy, holding an MIT Schwarzman College of Computing shared position with the Department of Electrical Engineering and Computer Science (EECS), who teaches the class with Kaelbling, notes that the two disciplines are more aligned than people might imagine, adding that the “differences are in emphasis and perspective.”

Tools for further theoretical thinking

Kaelbling and Hedden created AI and Rationality, offered for the first time in fall 2025, as part of the Common Ground for Computing Education, a cross-cutting initiative of the MIT Schwarzman College of Computing that brings multiple departments together to develop and teach new courses and launch new programs that blend computing with other disciplines.

With over two dozen students registered, AI and Rationality is one of two Common Ground classes with a foundation in philosophy, the other being 6.C40/24.C40 (Ethics of Computing).

While Ethics of Computing explores concerns about the societal impacts of rapidly advancing technology, AI and Rationality examines the disputed definition of rationality by considering several components: the nature of rational agency, the concept of a fully autonomous and intelligent agent, and the ascription of beliefs and desires onto these systems.

Because AI is extremely broad in its implementation and each use case raises different issues, Kaelbling and Hedden brainstormed topics that could provide fruitful discussion and engagement between the two perspectives of computer science and philosophy.

“It’s important when I work with students studying machine learning or robotics that they step back a bit and examine the assumptions they’re making,” Kaelbling says. “Thinking about things from a philosophical perspective helps people back up and understand better how to situate their work in actual context.”

Both instructors stress that this isn’t a course that provides concrete answers to questions on what it means to engineer a rational agent.

Hedden says, “I see the course as building their foundations. We’re not giving them a body of doctrine to learn and memorize and then apply. We’re equipping them with tools to think about things in a critical way as they go out into their chosen careers, whether they’re in research or industry or government.”

The rapid progress of AI also presents a new set of challenges in academia. Predicting what students may need to know five years from now is something Kaelbling sees as an impossible task. “What we need to do is give them the tools at a higher level — the habits of mind, the ways of thinking — that will help them approach the stuff that we really can’t anticipate right now,” she says.

Blending disciplines and questioning assumptions

So far, the class has drawn students from a wide range of disciplines — from those firmly grounded in computing to others interested in exploring how AI intersects with their own fields of study.

Throughout the semester’s reading and discussions, students grappled with different definitions of rationality and how they pushed back against assumptions in their fields.

On what surprised her about the course, Amanda Paredes Rioboo, a senior in EECS, says, “We’re kind of taught that math and logic are this golden standard or truth. This class showed us a variety of examples that humans act inconsistently with these mathematical and logical frameworks. We opened up this whole can of worms as to whether, is it humans that are irrational? Is it the machine learning systems that we designed that are irrational? Is it math and logic itself?”

Junior Okoroafor, a PhD student in the Department of Brain and Cognitive Sciences, appreciated the class’s challenges and the ways in which the definition of a rational agent could change depending on the discipline. “Representing what each field means by rationality in a formal framework makes it clear exactly which assumptions are shared, and which are different, across fields.”

The co-teaching, collaborative structure of the course, as with all Common Ground endeavors, gave students and the instructors opportunities to hear different perspectives in real-time.

For Paredes Rioboo, this is her third Common Ground course. She says, “I really like the interdisciplinary aspect. They’ve always felt like a nice mix of theoretical and applied from the fact that they need to cut across fields.”

According to Okoroafor, Kaelbling and Hedden demonstrated an obvious synergy between the fields, and it felt as if they were engaging and learning along with the class. Seeing how computer science and philosophy can inform each other helped him understand their common ground and the invaluable perspectives each brings to intersecting issues.

He adds, “Philosophy also has a way of surprising you.”