AI Solutions

Human-machine teaming dives underwater

The electricity to an island goes out. To find the break in the underwater power cable, a ship pulls up the entire line or deploys remotely operated vehicles (ROVs) to traverse the line. But what if an autonomous underwater vehicle (AUV) could map the line and pinpoint the location of the fault for a diver to fix?

Such underwater human-robot teaming is the focus of an MIT Lincoln Laboratory project funded through an internally administered R&D portfolio on autonomous systems and carried out by the Advanced Undersea Systems and Technology Group. The project seeks to leverage the respective strengths of humans and robots to optimize maritime missions for the U.S. military, including critical infrastructure inspection and repair, search and rescue, harbor entry, and countermine operations.

“Divers and AUVs generally don’t team at all underwater,” says principal investigator Madeline Miller. “Underwater missions requiring humans typically do so because they involve some sort of manipulation a robot can’t do, like repairing infrastructure or deactivating a mine. Even ROVs are challenging to work with underwater in very skilled manipulation tasks because the manipulators themselves aren’t agile enough.”

Beyond their superior dexterity, humans excel at recognizing objects underwater. But humans working underwater can’t perform complex computations or move very quickly, especially if they are carrying heavy equipment; robots have an edge over humans in processing power, high-speed mobility, and endurance. To combine these strengths, Miller and her team are developing hardware and algorithms for underwater navigation and perception — two key capabilities for effective human-robot teaming.

As Miller explains, divers may only have a compass and fin-kick counts to guide them. With few landmarks and potentially murky conditions caused by a lack of light at depth or the presence of biological matter in the water column, they can easily become disoriented and lost. For robots to help divers navigate, they need to perceive their environment. However, in the presence of darkness and turbidity, optical sensors (cameras) cannot generate images, while acoustic sensors (sonar) generate images that lack color and only show the shapes and shadows of objects in the scene. The historical lack of large, labeled sonar image datasets has hindered training of underwater perception algorithms. Even if data were available, the dynamic ocean can obscure the true nature of objects, confusing artificial intelligence. For instance, a downed aircraft broken into multiple pieces, or a tire covered in an overgrowth of mussels, may no longer resemble an aircraft or tire, respectively.

“Ultimately, we want to devise solutions for navigation and perception in expeditionary environments,” Miller says. “For the missions we’re thinking about, there is limited or no opportunity to map out the area in advance. For the harbor entry mission, maybe you have a satellite map but no underwater map, for example.”

On the navigation side, Miller’s team picked up on work started by the MIT Marine Robotics Group, led by John Leonard, to develop diver-AUV teaming algorithms. With their navigation algorithms, Leonard’s group ran simulations under optimal conditions and performed field testing in calm waters using human-paddled kayaks as proxies for both divers and AUVs. Miller’s team then integrated these algorithms into a mission-relevant AUV and began testing them under more realistic ocean conditions, initially with a support boat acting as a diver surrogate, and then with actual divers.

“We quickly learned that you need more sensing capabilities on the diver when you factor in ocean currents,” Miller explains. “With the algorithms demonstrated by MIT, the vehicle only needed to calculate the distance, or range, to the diver at regular intervals to solve the optimization problem of estimating the positions of both the vehicle and diver over time. But with the real ocean forces pushing everything around, this optimization problem blows up quickly.”
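In its simplest form, the estimation step Miller describes can be sketched as a range-only least-squares problem. The toy example below recovers a momentarily stationary diver's 2-D position from three range fixes taken at known AUV positions, using Gauss-Newton iteration; the real system jointly estimates both trajectories over time and must contend with currents, and all positions and numbers here are illustrative, not the team's actual algorithm.

```python
import numpy as np

# Toy range-only localization: an AUV at known positions measures its range
# to a (momentarily stationary) diver; Gauss-Newton least squares recovers
# the diver's 2-D position. All values are illustrative.

auv_positions = np.array([[0.0, 0.0], [40.0, 0.0], [20.0, 30.0]])  # AUV fixes (m)
true_diver = np.array([15.0, 12.0])
ranges = np.linalg.norm(auv_positions - true_diver, axis=1)  # noiseless for clarity

def residuals(x):
    """Difference between predicted and measured ranges at candidate position x."""
    return np.linalg.norm(auv_positions - x, axis=1) - ranges

x = np.array([10.0, 10.0])            # rough initial guess
for _ in range(20):                   # Gauss-Newton iterations
    r = residuals(x)
    d = x - auv_positions
    # Jacobian of each range w.r.t. position: unit vectors from AUV fixes to x
    J = d / np.linalg.norm(d, axis=1, keepdims=True)
    x = x - np.linalg.lstsq(J, r, rcond=None)[0]
# x now sits at the position consistent with all three range measurements
```

Once currents push both platforms around, the unknowns multiply (two full trajectories instead of one point), which is the blow-up Miller describes.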

On the perception side, Miller’s team has been developing an AI classifier that can process both optical and sonar data mid-mission and solicit human input for any objects classified with uncertainty.

“The idea is for the classifier to pass along some information — say, a bounding box around an image — to the diver and indicate, ‘I think this is a tire, but I’m not sure. What do you think?’ Then, the diver can respond, ‘Yes, you’ve got it right,’ or, ‘No, look over here in the image to improve your classification,’” Miller says.
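A minimal sketch of that confidence-gated loop is below. The threshold, labels, and message format are hypothetical placeholders, not the team's actual software.

```python
# Confidence-gated human-in-the-loop triage: auto-accept confident
# detections, and send a compact query to the diver for uncertain ones.
# Threshold and message fields are illustrative assumptions.

CONFIDENCE_THRESHOLD = 0.85

def triage(detection):
    """Decide whether a detection is auto-accepted or needs diver input."""
    if detection["confidence"] >= CONFIDENCE_THRESHOLD:
        return {"action": "accept", "label": detection["label"]}
    # Low confidence: send a small message (label guess + bounding box)
    # over the acoustic link instead of the full image.
    return {"action": "ask_diver",
            "query": {"label": detection["label"], "bbox": detection["bbox"]}}

def apply_feedback(detection, diver_reply):
    """Fold the diver's answer back into the classification."""
    if diver_reply.get("confirmed"):
        return {**detection, "confidence": 1.0}
    # Diver redirected attention: flag the suggested region for reclassification.
    return {**detection, "bbox": diver_reply["look_here"], "needs_reclassify": True}

detection = {"label": "tire", "confidence": 0.6, "bbox": (120, 80, 60, 40)}
decision = triage(detection)                          # below threshold: ask the diver
updated = apply_feedback(detection, {"confirmed": True})
```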

This feedback loop requires an underwater acoustic modem to support diver-AUV communication. State-of-the-art data rates in underwater acoustic communications would require tens of minutes to send an uncompressed image from the AUV to the diver. So, one aspect the team is investigating is how to compress information down to the minimum needed to be useful, working within the constraints of the low bandwidth and high latency of underwater communications and the low size, weight, and power of the commercial off-the-shelf (COTS) hardware they’re using. For their prototype system, the team procured mostly COTS sensors and built a sensor payload that would easily integrate into an AUV routinely employed by the U.S. Navy, with the goal of facilitating technology transition. Beyond sonar and optical sensors, the payload features an acoustic modem for ranging to the diver and several data processing and compute boards.
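The bandwidth arithmetic behind the "tens of minutes" figure is easy to sanity-check. Underwater acoustic modems commonly deliver on the order of hundreds to a few thousand bits per second; the specific rate and image size below are illustrative assumptions, not the team's hardware figures.

```python
# Back-of-the-envelope timing for the diver-AUV acoustic link.
# Data rate and image size are illustrative assumptions.

image_bits = 640 * 480 * 8              # one uncompressed 8-bit grayscale frame
rate_bps = 1_500                        # an assumed acoustic-modem data rate

uncompressed_min = image_bits / rate_bps / 60    # roughly half an hour per frame

bbox_and_label_bits = 8 * 64            # a compact descriptor (label + bounding box)
descriptor_s = bbox_and_label_bits / rate_bps    # well under a second
```

This is why the system exchanges compact descriptors such as a label guess and bounding box rather than raw imagery.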

Miller’s team has tested the sensor-equipped AUV and algorithms around coastal New England — including in the open ocean near Portsmouth, New Hampshire, with the University of New Hampshire’s (UNH) Gulf Surveyor and Gulf Challenger coastal research vessels as diver surrogates, and on the Boston-area Charles River, with an MIT Sailing Pavilion skiff as the surrogate.

“The UNH boats are well-equipped and can access realistic ocean conditions,” Miller says. “But pretending to be a diver with a large boat is hard. With the skiff, we can move more slowly and get the relative motion in tune with how a diver and AUV would navigate together.”

Last summer, the team started testing equipment with human divers at Michigan Technological University’s Great Lakes Research Center. Although the divers lacked an interface to feed back information to the AUV, each swam holding the team’s tube-shaped prototype tablet, dubbed a “tube-let.” The tube-let was equipped with a pressure and depth sensor, inertial measurement unit (to track relative motion), and ranging modem — all necessary components for the navigation algorithms to solve the optimization problem.

“A challenge during testing was coordinating the motion of the diver and vehicle, because they don’t yet collaborate,” Miller says. “Once the divers go underwater, there is no communication with the team on the surface. So, you have to plan where to put the diver and vehicle so they don’t collide.”

The team also worked on the perception problem. The water clarity of the Great Lakes at that time of year allowed for underwater imaging with an optical sensor. Caroline Keenan, a Lincoln Scholars Program PhD student jointly working in the laboratory’s Advanced Undersea Systems and Technology Group and Leonard’s research group at MIT, took the opportunity to advance her work on knowledge transfer from optical sensors to sonar sensors. She is exploring whether optical classifiers can train sonar classifiers to recognize objects for which sonar data doesn’t exist. The motivation is to reduce the human operator load associated with labeling sonar data and training sonar classifiers.

With the internally funded research program coming to an end, Miller’s team is now seeking external sponsorship to refine and transition the technology to military or commercial partners.

“The modern world runs on undersea telecommunication and power cables, which are vulnerable to attack by disruptive actors. The undersea domain is becoming increasingly contested as more nations develop and advance the capabilities of autonomous maritime systems. Maintaining global economic security and U.S. strategic advantage in the undersea domain will require leveraging and combining the best of AI and human capabilities,” Miller says.

Bringing AI-driven protein-design tools to biologists everywhere

Artificial intelligence is already proving it can accelerate drug development and improve our understanding of disease. But to turn AI into novel treatments, we need to get the latest, most powerful models into the hands of scientists.

The problem is that most scientists aren’t machine-learning experts. Now the company OpenProtein.AI is helping scientists stay on the cutting edge of AI with a no-code platform that gives them access to powerful foundation models and a suite of tools for designing proteins, predicting protein structure and function, and training models.

The company, founded by Tristan Bepler PhD ’20 and former MIT associate professor Tim Lu PhD ’07, is already equipping researchers in pharmaceutical and biotech companies of all sizes with its tools, including internally developed foundation models for protein engineering. OpenProtein.AI also offers its platform to scientists in academia for free.

“It’s a really exciting time right now because these models can not only make protein engineering more efficient — which shortens development cycles for therapeutics and industrial uses — they can also enhance our ability to design new proteins with specific traits,” Bepler says. “We’re also thinking about applying these approaches to non-protein modalities. The big picture is we’re creating a language for describing biological systems.”

Advancing biology with AI

Bepler came to MIT in 2014 as part of the Computational and Systems Biology PhD Program, studying under Bonnie Berger, MIT’s Simons Professor of Applied Mathematics. It was there that he realized how little we understand about the molecules that make up the building blocks of biology.

“We hadn’t characterized biomolecules and proteins well enough to create good predictive models of what, say, a whole genome circuit will do, or how a protein interaction network will behave,” Bepler recalls. “It got me interested in understanding proteins at a more fine-grained level.”

Bepler began exploring ways to predict the chains of amino acids that make up proteins by analyzing evolutionary data. This was before DeepMind released AlphaFold, a powerful prediction model for protein structure. The work led to one of the first generative AI models for understanding and designing proteins — what the team calls a protein language model.

“I was really excited about the classical framework of proteins and the relationships between their sequence, structure, and function. We don’t understand those links well,” Bepler says. “So how could we use these foundation models to skip the ‘structure’ component and go straight from sequence to function?”

After earning his PhD in 2020, Bepler entered Lu’s lab in MIT’s Department of Biological Engineering as a postdoc.

“This was around the time when the idea of integrating AI with biology was starting to pick up,” Lu recalls. “Tristan helped us build better computational models for biologic design. We also realized there’s a disconnect between the most cutting-edge tools available and the biologists, who would love to use these things but don’t know how to code. OpenProtein came from the idea of broadening access to these tools.”

Bepler had worked at the forefront of AI as part of his PhD. He knew the technology could help scientists accelerate their work.

“We started with the idea to build a general-purpose platform for doing machine learning-in-the-loop protein engineering,” Bepler says. “We wanted to build something that was user friendly because machine-learning ideas are kind of esoteric. They require implementation, GPUs, fine-tuning, designing libraries of sequences. Especially at that time, it was a lot for biologists to learn.”

OpenProtein’s platform, in contrast, features an intuitive web interface for biologists to upload data and conduct protein engineering work with machine learning. It features a range of open-source models, including PoET, OpenProtein’s flagship protein language model.

PoET, short for Protein Evolutionary Transformer, was trained on protein groups to generate sets of related proteins. Bepler and his collaborators showed it could generalize about evolutionary constraints on proteins and incorporate new information on protein sequences without retraining, allowing other researchers to add experimental data to improve the model.

“Researchers can use their own data to train models and optimize protein sequences, and then they can use our other tools to analyze those proteins,” Bepler says. “People are generating libraries of protein sequences in silico [on computers] and then running them through predictive models to get validation and structural predictors. It’s basically a no-code front-end, but we also have APIs for people who want to access it with code.”

The models help researchers design proteins faster, then decide which ones are promising enough for further lab testing. Researchers can also input proteins of interest, and the models can generate new ones with similar properties.
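Schematically, that design-then-filter loop looks something like the sketch below: propose variants in silico, score them with a predictive model, and shortlist the best for the lab. The mutation scheme and scoring function are toy placeholders standing in for a trained property predictor; none of this is OpenProtein.AI's actual models or API.

```python
import random

# Toy machine-learning-in-the-loop protein engineering sketch.
# The sequence, mutation scheme, and scoring function are illustrative only.

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
random.seed(0)  # deterministic for illustration

def mutate(seq, n_mutations=2):
    """Return seq with n random single-residue substitutions."""
    s = list(seq)
    for _ in range(n_mutations):
        i = random.randrange(len(s))
        s[i] = random.choice(AMINO_ACIDS)
    return "".join(s)

def score(seq):
    """Toy fitness proxy (fraction of hydrophobic residues); a real workflow
    would call a trained property predictor here."""
    return sum(c in "AVILMFWY" for c in seq) / len(seq)

parent = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"       # illustrative starting sequence
library = [mutate(parent) for _ in range(200)]     # in-silico variant library
ranked = sorted(library, key=score, reverse=True)  # predictive-model triage
shortlist = ranked[:10]                            # candidates for lab validation
```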

Since its founding, OpenProtein’s team has continued to add tools to its platform for researchers regardless of their lab size or resources.

“We’ve tried really hard to make the platform an open-ended toolbox,” Bepler says. “It has specific workflows, but it’s not tied specifically to one protein function or class of proteins. One of the great things about these models is they are very good at understanding proteins broadly. They learn about the whole space of possible proteins.”

Enabling the next generation of therapies

The large pharmaceutical company Boehringer Ingelheim began using OpenProtein’s platform in early 2025. Recently, the companies announced an expanded collaboration that will see OpenProtein’s platform and models embedded into Boehringer Ingelheim’s work as it engineers proteins to treat diseases like cancer and autoimmune or inflammatory conditions.

Last year, OpenProtein also released a new version of its protein language model, PoET-2, that outperforms much larger models while using a small fraction of the computing resources and experimental data.

“We really want to solve the question of how we describe proteins,” Bepler says. “What’s the meaningful, domain-specific language of protein constraints we use as we generate them? How can we bring in more evolutionary constraints? How can we describe an enzymatic reaction a protein carries out such that a model can generate sequences to do that reaction?”

Moving forward, the founders are hoping to make models that factor in the changing, interconnected nature of protein function.

“The area I am excited about is going beyond protein binding events to use these models to predict and design dynamic features, where the protein has to engage two, three, or four biological mechanisms at the same time, or change its function after binding,” says Lu, who currently serves in an advisory role for the company.

As progress in AI races forward, OpenProtein continues to see its mission as giving scientists the best tools to develop new treatments faster.

“As work gets more complex, with approaches incorporating things like protein logic and dynamic therapies, the existing experimental toolsets become limiting,” Lu says. “It’s really important to create open ecosystems around AI and biology. There’s a risk that AI resources could get so concentrated that the average researcher can’t use them. Open access is super important for the scientific field to make progress.”

Jacob Andreas and Brett McGuire named Edgerton Award winners

MIT Associate Professor Jacob Andreas of the Department of Electrical Engineering and Computer Science [EECS] and MIT Associate Professor Brett McGuire of the Department of Chemistry have been selected as the winners of the 2026 Harold E. Edgerton Faculty Achievement Award. Established in 1982 as a permanent tribute to Institute Professor Emeritus Harold E. Edgerton’s great and enduring support for younger faculty members, this award is given annually in recognition of exceptional distinction in teaching, research, and service.

“The Department of Chemistry is extremely delighted to see Brett recognized for science that has changed how we think about carbon in space,” says Class of 1942 Professor of Chemistry and Department Head Matthew D. Shoulders. “Brett’s lab combines laboratory spectroscopy, radio astronomy, and sophisticated signal-analysis methods to pull definitive molecular fingerprints out of extraordinarily faint data. His discovery of polycyclic aromatic hydrocarbons in the cold interstellar medium has opened a powerful new window on astrochemistry. Moreover, Brett is inventing the creative and unique tools that make discoveries like this possible.”

“Jacob Andreas represents the very best of MIT EECS,” says Asu Ozdaglar, EECS department head. “He is an innovative researcher whose work combines computational and linguistically informed approaches to build foundations of language learning. He is an extraordinary educator who has brought these forefront ideas into our core classes in natural language processing and machine learning. His ability to bridge foundational theory with real-world impact, while also advancing the social and ethical dimensions of computing, makes him truly deserving of the Edgerton Faculty Achievement Award.”

Andreas joined the MIT faculty in July 2019, and is affiliated with the Computer Science and Artificial Intelligence Laboratory. His work is in natural language processing (NLP), and more broadly in AI. He aims to understand the computational foundations of language learning, and to build intelligent systems that can learn from human guidance. Among other honors, Andreas has received Samsung’s AI Researcher of the Year award, MIT’s Kolokotrones and Junior Bose teaching awards, a 2024 Sloan Research Fellowship, and paper awards at the North American Chapter of the Association for Computational Linguistics, the International Conference on Machine Learning, and the Association for Computational Linguistics.

Andreas received his BS from Columbia University, his MPhil from Cambridge University (where he studied as a Churchill scholar), and his PhD in natural language processing from the University of California at Berkeley. His work in natural language processing has taken on thorny problems in the capability gap between humans and computers. “The defining feature of human language use is our capacity for compositional generalization,” explains Antonio Torralba, Delta Electronics Professor and faculty head of Artificial Intelligence and Decision-Making in the Department of EECS. “Many of the core challenges in natural language processing are addressed by simply training larger and larger neural models, but this kind of compositional generalization remains a persistent difficulty, and without the ability to generalize compositionally, the deep learning toolkit will never be robust enough for the most challenging real-world NLP tasks. Jacob’s work on compositional modeling draws new connections between NLP and work in computer vision and physics aimed at modeling systems governed by symmetries and other algebraic structures, and, using these connections, he has been able to build NLP models exhibiting a number of new, human-like language acquisition behaviors, including one-shot word learning, learning via mutual exclusivity constraints, and learning of grammatical rules in extremely low-resource settings.”

Within EECS, Andreas has developed multiple advanced courses in natural language processing, as well as new exercises designed to get students to grapple with important social and ethical considerations in machine learning deployment. “Jacob has taken a leading role in completely modernizing and extending our course offerings in natural language processing,” says award nominator Leslie Pack Kaelbling, Panasonic Professor in the Department of EECS. “He has led the development of a modern two-course sequence, which is a cornerstone of the new AI+D [artificial intelligence and decision-making] major, routinely enrolling several hundred students each semester. His command of the area is broad and deep, and his classes integrate classical structural understanding of language with the most modern learning-based approaches. He has put MIT EECS on the worldwide map as a place to study natural language at every level.”

Brett McGuire joined the MIT faculty in 2020 and was promoted to associate professor in 2025. His research operates at the intersection of physical chemistry, molecular spectroscopy, and observational astrophysics, where he seeks to uncover how the chemical building blocks of life evolve alongside and help shape the birth of stars and planets. A former Jansky Fellow and then Hubble Postdoctoral Fellow at the National Radio Astronomy Observatory, McGuire has a BS in chemistry from the University of Illinois and a PhD in physical chemistry from Caltech. His honors include a 2026 Sloan Fellowship, the Beckman Young Investigator Award, the Helen B. Warner Prize for Astronomy, and the MIT Award for Teaching with Digital Technology.

The faculty who nominated McGuire for this award praised his extraordinary public outreach, his immediate willingness to take on teaching class 5.111 (Principles of Chemical Science), a General Institute Requirement (GIR) course comprising 150–500 students, and his service to both the MIT and astrochemical communities.

“Brett is at the very top of astrochemical scientists in his age group due to his discovery of fused carbon ring compounds in the cold region of the ISM [interstellar medium], an observation that provides a route for carbon incorporation in planets,” says Sylvia Ceyer, the John C. Sheehan Professor of Chemistry, in her nomination statement. “His extensive involvement in service-oriented activities within the astrochemical/physical community is highly unusual for a junior scientist, and is testament to the value that the astronomical community places in his wisdom and judgment. His phenomenal organizational skills have made his contributions to graduate admission protocols and seminar administration at MIT the envy of the department. And most importantly, Brett is a superb teacher, who cares deeply about students’ understanding and success, not only in his course, but in their future endeavors.”

“As an assistant professor, Brett volunteered to teach 5.111, a large GIR course with 150–500 students, and has received some of the best teaching evaluations among all faculty who have led the subject,” says Mei Hong, the David A. Leighty Professor of Chemistry. “He has a natural talent in explaining abstract physical chemistry concepts in an engaging manner. His slides, which he prepared from scratch instead of modifying from previous years’ material from other professors, are clear, and … the combination of lucid explanation and humor has generated great enthusiasm and interest in chemistry among students.”

Subject evaluations from McGuire’s courses praised his humor, the clarity of his explanations, and his ability to transform a lecture into a “science show.” “I haven’t felt this sort of desire for the depth of understanding in a subject beyond just a straight grade [in some time],” says one student. “Brett definitely stimulated that love of learning for me.” 

“Brett is an outstanding faculty member who is dedicated to fostering student learning and success,” says Jennifer Weisman, assistant director of academic programs in chemistry. “He is thoughtful, caring, and goes above and beyond to help his colleagues, students, and staff.”

“I’m thrilled to be selected for the Edgerton Award this year,” says McGuire. “The award is nominally for teaching, research, and service; MIT and the chemistry department in particular have been an incredible place to learn and grow in all these areas. I’m incredibly grateful for the mentorship, enthusiasm, and support I have received from my colleagues, from my students both in the lab and in the classroom, and from the MIT community during my time here. I look forward to many more years of exciting discovery together with this one-of-a-kind community.”

Q&A: MIT SHASS and the future of education in the age of AI

The MIT School of Humanities, Arts, and Social Sciences (SHASS) was founded in 1950 in response to “a new era emerging from social upheaval and the disasters of war,” as outlined in the 1949 Lewis Committee Report.

The report’s findings emphasized MIT’s role and responsibility in the new nuclear age, which called for doubling down on genuine “integration” of scientific and technical topics with humanistic scholarship and teaching. Only that way, the committee wrote, could MIT tackle “the most difficult and complicated problems confronting our generation.”

As SHASS marks its 75th anniversary, Dean Agustín Rayo answers questions about why the need for developing students with broad minds and human understanding is as urgent as ever, given pressing challenges in the midst of a new technological revolution.

Q: Many universities are responding to artificial intelligence by launching new technical programs or updating curricula. You’ve suggested the change is deeper than that. Why?

A: Artificial intelligence isn’t just changing the way students learn — it’s transforming every aspect of society. The labor market is experiencing a dramatic shift, upending traditional paths to financial stability. And AI is changing the ways we bring meaning to our lives: the ways we build relationships, the ways we pay attention, and the things we enjoy doing.

The upshot is that the most important question universities need to ask is not how to adapt our pedagogy to AI — although we certainly need to address that. The most important question we need to ask is how to provide an education that brings real value to students in the age of AI. 

We need to ensure that universities provide students with the tools they need to find a path to financial security and to build meaningful lives.

We need to produce students with minds that are both nimble and broad. We need our students to not only be able to execute tasks effectively, but also have the judgment to determine which tasks are worth executing. We need students who have a moral compass, and who understand how the world works, in all of its political, economic, and human complexity. We need students who know how to think critically, and who have excellent communication and leadership skills.

Q: What role do the humanities, arts, and social sciences play in preparing MIT students for that future?

A: They’re essential, and are rightly a core part of an MIT education: MIT has long required its undergraduates to take at least eight courses in HASS disciplines to graduate.

Fields like philosophy, political science, economics, literature, history, music, and anthropology are crucial to developing the parts of our lives that are essentially human — the parts that will not be replaced by AI.

They are crucial to developing critical thinking and a moral compass. They are crucial to understanding people — our values, institutions, cultures, and ways of thinking. They are crucial to creating students who are broad thinkers who understand the way the world works. They are crucial to developing students who are excellent communicators and are able to describe their projects — and their lives — in a way that endows them with meaning.

Our students understand this. Here is how one of them put the point: “Engineering gives me the tools to measure the world; the humanities teach me how to interpret it. That balance has shaped both how I do science and why I do it.”

Q: Some people worry that emphasizing humanistic study could dilute MIT’s technological edge. How do you respond to that concern?

A: I think the opposite is true. 

MIT is an important engine for social mobility in the United States, and a catalyst for entrepreneurship, which has added billions of dollars to the American economy. That cannot be separated from the fact that we are a technical institution, which brings together the country’s most talented undergraduates — regardless of socioeconomic background — and transforms them into the next generation of our country’s top scientific and engineering leaders. 

MIT plays an incredibly important role in our country. So, the last thing I want to do is mess with our secret sauce.

But I also think that the age of AI is forcing us to rethink what it means to be a top engineer. 

Think about artificial intelligence itself. The challenges we face are not just technical. Issues like bias, accountability, governance, and the societal impact of automation are no less important. Understanding those dimensions helps technologists design better systems and anticipate real-world consequences.

Strengthening the humanities at MIT isn’t a departure from our core mission — it’s a way of ensuring that our technical leadership continues to matter in the world.

Q: What kinds of changes is MIT SHASS pursuing to support this vision?

A: There’s a lot going on! 

We’ve launched the MIT Human Insight Collaborative (MITHIC) as a way of strengthening research in the humanities, arts, and social sciences, and of deepening collaboration with colleagues across MIT.

We’re shaping the undergraduate experience to ensure that every MIT student engages with the big societal questions shaping our time, from democratic resilience to climate change to the ethics of new technologies.

We’re building stronger connections through initiatives like the creation of shared faculty positions with the MIT Schwarzman College of Computing (SCC). And we recently launched a new Music Technology and Computation Graduate Program with the School of Engineering.

We’re partnering with SERC (the SCC’s Social and Ethical Responsibilities of Computing) to design new classes on the intersection of computing and human-centered issues, such as ethics.

And we’re elevating the humanities — for their own sake, and as a space for experimentation, bringing together students, faculty, and partners to explore new forms of research, teaching, and public engagement.

This is a very exciting time for SHASS.

Codex for (almost) everything

The updated Codex app for macOS and Windows adds computer use, in-app browsing, image generation, memory, and plugins to accelerate developer workflows.

How access models are shaping AI cybersecurity deployment

What happens when advanced AI capabilities enter the cybersecurity stack at scale?

💡
Recent developments from OpenAI and Anthropic highlight a meaningful shift in how AI-powered security tools reach practitioners. The focus has moved beyond raw model performance and into a more operational question:

How is access to these systems structured, verified, and deployed?

For AI professionals, this marks an important moment. Cybersecurity AI now sits at the intersection of infrastructure, governance, and real-world application.

In other words, it has moved from interesting to essential.

So what does this mean for AI professionals?


The rise of AI-native cybersecurity tools

AI-driven cybersecurity continues to evolve from passive detection into active analysis and response. Models such as GPT-5.4-Cyber introduce capabilities that extend far beyond traditional tooling.

Security teams now have access to systems that can interpret compiled binaries, identify anomalies, and surface vulnerabilities without requiring source code.

This represents a meaningful acceleration in workflows that previously required manual reverse engineering and deep domain expertise.

The result is a shift toward AI-augmented security operations, where analysts operate alongside models that continuously evaluate and interpret complex systems. The coffee consumption may stay the same, yet the output per analyst looks very different…


Two emerging approaches to access

As these capabilities mature, different deployment strategies are taking shape. The contrast reflects a broader design decision within AI cybersecurity.

Some platforms emphasize controlled distribution, where access is limited to a small group of verified organizations. This approach prioritizes tight oversight and curated usage environments.

Others adopt a broader access model, where entry is granted through identity verification and structured onboarding. This approach focuses on enabling a wider pool of security professionals to leverage advanced tools.

💡
Both strategies reflect valid priorities. Each introduces distinct considerations for scalability, collaboration, and operational readiness.

What this means for AI professionals

For practitioners, access models now play a central role in how cybersecurity systems are integrated into existing workflows. The conversation has expanded from capability evaluation into deployment strategy.

Security leaders and AI engineers increasingly evaluate questions such as:

• How AI tools integrate into existing security pipelines and SIEM platforms

• How identity verification frameworks support controlled access at scale

• How model outputs align with internal validation and audit processes

• How teams manage collaboration between human analysts and AI systems

These considerations highlight a broader trend. AI cybersecurity requires alignment across engineering, security, and governance functions. Silos rarely perform well under pressure, and as we all know, cybersecurity provides plenty of pressure.
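The controlled-distribution versus verified-broad-access contrast above can be made concrete with a small sketch. Everything here is hypothetical — the `AccessPolicy` fields, the user record shape, and the identity-verification stub are illustrative, not any vendor's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class AccessPolicy:
    # Non-empty set => controlled distribution to verified organizations only.
    verified_orgs: set = field(default_factory=set)
    # True => broader access, gated by identity verification at onboarding.
    require_identity_check: bool = True

def is_identity_verified(user: dict) -> bool:
    # Stand-in for a call to a real identity-verification service.
    return user.get("identity_verified", False)

def may_use_tool(user: dict, policy: AccessPolicy) -> bool:
    """Grant tool access only to verified users from allowed organizations."""
    if policy.require_identity_check and not is_identity_verified(user):
        return False
    if policy.verified_orgs and user.get("org") not in policy.verified_orgs:
        return False
    return True

policy = AccessPolicy(verified_orgs={"acme-sec"})
print(may_use_tool({"org": "acme-sec", "identity_verified": True}, policy))  # True
print(may_use_tool({"org": "unknown", "identity_verified": True}, policy))   # False
```

A real deployment would back `is_identity_verified` with an identity provider and log every decision, but the design point is the same: the gate, not the model, encodes the access strategy.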


The operational impact on security teams

AI-powered cybersecurity tools introduce measurable improvements in speed and coverage. At the same time, they reshape how teams approach daily operations.

Routine analysis tasks can be automated or augmented, allowing analysts to focus on higher-value investigations. Pattern recognition and anomaly detection benefit from continuous model evaluation, providing earlier visibility into potential threats.

At the same time, teams gain the ability to inspect complex systems with greater depth. Reverse engineering, malware classification, and vulnerability detection become more accessible across a wider range of skill levels.

This evolution supports a more distributed model of expertise, where advanced capabilities extend across the organization rather than remaining concentrated in specialized roles. More eyes on the problem, fewer bottlenecks in the process.


Key considerations for implementation

As organizations adopt AI-driven cybersecurity tools, several practical considerations come into focus:

• Integration: Alignment with existing infrastructure, including cloud environments and security platforms

• Validation: Processes for verifying model outputs and ensuring reliability in high-stakes scenarios

• Access control: Mechanisms for managing user permissions and maintaining secure usage

• Monitoring: Continuous oversight of model behavior and system performance

These factors shape how effectively AI systems contribute to security outcomes. Strong implementation frameworks support both performance and trust.
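One way the validation, access-control, and monitoring considerations above can fit together is a thin wrapper around the model call. This is a minimal sketch under stated assumptions: `analyze_with_model` is a hypothetical stand-in for a real AI security-analysis call, and the audit-record fields are illustrative.

```python
import hashlib
import time
from typing import Optional

def analyze_with_model(artifact: bytes) -> dict:
    # Placeholder: pretend the model flags an anomalous binary section.
    return {"finding": "anomalous-section", "confidence": 0.91}

def validated_analysis(artifact: bytes, audit_log: list,
                       min_confidence: float = 0.8) -> Optional[dict]:
    """Run the model, append an audit record, and gate low-confidence output."""
    result = analyze_with_model(artifact)
    accepted = result.get("confidence", 0.0) >= min_confidence
    audit_log.append({
        "ts": time.time(),
        "artifact_sha256": hashlib.sha256(artifact).hexdigest(),
        "result": result,
        "accepted": accepted,
    })
    # Below-threshold findings are withheld and routed to human review.
    return result if accepted else None
```

Shipping the `audit_log` entries to a SIEM and queueing rejected results for analysts would be the natural next steps; the point is that validation and auditability are properties of the pipeline, not the model.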


Building trust in AI-driven security systems

Trust remains a central component of AI adoption in cybersecurity. Teams rely on systems that operate consistently, transparently, and with measurable accuracy.

Clear audit trails, reproducible outputs, and well-defined evaluation metrics contribute to confidence in AI-generated insights. Structured access models further support trust by ensuring that usage aligns with organizational policies and standards.

As AI systems take on more responsibility within security workflows, trust becomes an operational requirement rather than a conceptual goal.


Looking ahead: Access as a design decision

AI cybersecurity continues to evolve rapidly, with new models and capabilities entering the landscape at a steady pace. Alongside this growth, access models have emerged as a defining factor in how these systems are used.

For AI professionals, this represents a shift in focus. Technical capability remains essential, while deployment strategy now carries equal weight. Decisions around access, verification, and integration shape how effectively AI contributes to security outcomes.

The next phase of AI cybersecurity development will likely bring further innovation in both capability and delivery. Teams that approach access as a core design decision will be well-positioned to adapt and scale.

Innovation in AI cybersecurity continues to accelerate. With the right access models in place, organizations can translate advanced capabilities into practical, high-impact security outcomes.

And ideally, sleep a little better at night…

Babies Born from Dead Parents Will Increase with New Tech. Are We Ready?



Welcome back to the Abstract! These are the studies this week that peacefully passed the crown, predicted trouble on the horizon, gave life after death, and coastally shelved an idea.

First, scientists watch a succession story play out for years in a naked mole rat colony. Then: prediction markets as a public health threat, the thorny questions of posthumous reproduction, and a walk on the shores of an ancient alien sea.

As always, for more of my work, check out my book First Contact: The Story of Our Obsession with Aliens or subscribe to my personal newsletter the BeX Files.

Digging into the palace intrigue of a rodent realm 

Abeywardena, Shanes C., Schraibman, Alexandria M., et al. “Peaceful queen succession in the naked mole rat.” Science Advances.

Murderous queens. Bloody power struggles. Strictly enforced hierarchies. I’m speaking, of course, of naked mole rats, a bizarre species of rodent that becomes embroiled in violent conflicts over the succession of one breeding queen to the next. 

Though aggression in succession is the norm for these animals, scientists now report a rare peaceful transition of power from one queen to her daughter in a captive colony. 

The discovery suggests that “the less common peaceful trajectory to queen succession…is possible under some conditions” especially when “aggression-based enforcement may be insufficient or unnecessary and when the cost of a ‘war’ may be too high,” according to the new study.

As we’ve covered before on the Abstract, mole rats (both the naked kind and the non-naked kind) are the only mammals to live in eusocial colonies similar to bees or ants, meaning they are reigned over by one breeding queen and her subordinate workers. In addition to this unique social structure, mole rats display a number of fascinating behavioral and genetic adaptations, including long lifespans and low rates of cancer, which has made them a popular species for research.

Naked mole rats may not look all that intimidating, but when it’s time to anoint a new queen, the fur starts to fly (or it would, if these animals had any fur). If a queen dies or is deposed by rivals, subordinate females in the colony battle to take the throne.

But scientists co-led by Shanes Abeywardena and Alexandria M. Schraibman of the Salk Institute for Biological Studies observed a different succession story that unfolded over many years in the Amigos captive colony housed in San Diego. 

Starting in 2019, a queen named Teré reigned over the colony and produced many healthy pups. Once the colony became crowded, with nearly 40 members, Queen Teré began delivering litters with no surviving pups. When the researchers removed half of the members, she began to produce surviving pups again, though not many. The team then deliberately introduced another stressor by moving the colony to a new facility in 2022, after which Queen Teré stopped reproducing entirely.

Summary of the Amigos colony’s succession story. Image: Abeywardena, Shanes C., Schraibman, Alexandria M., et al.

In response, Alexandria, one of Teré’s daughters, became pregnant in 2023 and 2024, but her litters also produced no survivors, and she had to be euthanized in 2024 due to a uterine torsion. Finally, the long reproductive hiatus was ended after three years by the ascension of Alexandria’s sister, Arwen, who became Queen Arwen upon her delivery of healthy pups in October 2025.

“Aside from a single incident on 6 February 2025 in which one animal was found with a superficial bite wound and dried blood around the face, an injury that resolved without recurrence, no aggression or dominance related conflict was observed,” the researchers said. “Instead, Queen Teré was reported to exhibit ‘guarding’ behavior of Arwen and her litter. No other signs of social instability, behavioral escalation, or colony-wide distress were documented.”

“Together, these observations indicate that following the decline of Queen Teré’s reproductive capacity and the loss of the intermediary breeder Alexandria, Arwen successfully assumed the reproductive role without eliciting aggression from the reigning queen or from other colony members,” the team concluded.

The study is an antidote to the story we covered last week about a lethal chimp “civil war,” demonstrating that animals with strict dominance structures choose peace over violence in some cases. My only note is that Teré be given the honorific Queen Mother for her service.

In other news…

The over/under on prediction markets

Packin, Nizan Geslevich and Rabinovitz, Sharon. “Prediction markets as a public health threat.” Science.

Prediction markets (PMs) are exploding in popularity, but researchers warn that the “addictive design, vulnerable users, and permissive regulatory environments” that characterize these markets “are a well-established formula for population-level harm,” according to the Policy Forum section of the journal Science.

PMs operated by companies like Kalshi or Polymarket “pose underappreciated threats to democratic integrity” and are linked to “addictive behaviors,” according to authors Nizan Geslevich Packin of Baruch College Zicklin School of Business and Sharon Rabinovitz of the University of Haifa. For instance, PMs can enable insider trading about classified government information and expose millions of users to the risk of addiction and major financial losses.

“A public health approach reframes PM risks as predictable outcomes of environmental design, analogous to tobacco control’s success in treating smoking as population-level exposure rather than individual vice,” the team argued in the article. 

“The window for precautionary action is closing,” the researchers emphasized. “Each week of billion-dollar PM activity…prolongs a large uncontrolled experiment on users.”

It remains to be seen whether this warning about the dangers of a wild new industry will materialize into meaningful regulatory action. Want to make a bet?

Creating new life after death

Bamford, Sandra Carol. “Spectral Connections: Anthropological Engagements with Posthumous Reproduction.” Cambridge Archaeological Journal.

Posthumous children—children born after the death of one or both parents—are popular in myth and fiction, from the Greek Dionysus to more modern characters like John Connor or Daenerys Targaryen. 

But this is also a real demographic of people that may evolve in interesting ways as reproductive technologies enable larger numbers of posthumous conceptions—in which the sperm and egg donors for an embryo may be deceased, such as the case of a boy born in 2018 whose mother and father had both died years earlier in a car crash.

In this way, “frozen sperm, eggs (or embryos) are, at one and the same time, both alive and dead,” said Sandra Bamford of the University of Toronto in a new anthropological study of the topic. “Through their frozen gametes and the potential of new kin connections in the future, the dead remain as active participants influencing the lives of the living.”

The study, which is part of a broader journal issue exploring kinship, pulls together many intriguing case studies, including the “Nuer ghost marriage” practices of Sudan, in which a deceased man can be considered the father of a kinsman’s children, or the case of William Kane, who bequeathed frozen sperm to his girlfriend, sparking a legal battle with his adult children after his death by suicide. 

In other words, the legal, ethical, and practical implications of posthumous conception are still very much in flux, raising thorny questions about when, and how, the dead can produce new life. For instance: the ambiguities over judging the consent of a deceased person over the use of their posthumous gametes; the rights of posthumously conceived children to be named heirs of estates; and the possible emotional and psychological toll on posthumously conceived children, along with their family members.  

The Rime of the Really Ancient Mariner 

Zaki, Abdallah S. and Lamb, Michael P. “Identifying the topographic signature of early Martian oceans.” Nature.

We’ll close, as all things should, with waves lapping on long-lost alien shores. The surface of Mars is etched with the memory of rivers, lakes, and perhaps even an expansive ocean that may have covered much of its northern hemisphere between three and four billion years ago. 

Scientists have already mapped out the rough contours of what may be an ancient Martian shoreline, but a new study throws the seas into sharper relief by identifying topographic signs of a possible coastal shelf. The team argued in their study that these shelf features may be a better indicator of a past ocean than shoreline features, based on similar observations on Earth.

An illustration taken from orbiter data identifying the coastal shelf region on Mars. Image: A. Zaki

“Our results indicate that long-lived ancient oceans on presently arid planets may be best identified not only through discrete shorelines but also through…a global coastal shelf,” said researchers led by Abdallah Zaki and Michael Lamb of Caltech. The study supports “the presence of an ancient ocean on the northern plains of Mars that was bounded by a coastal shelf.”

While this ocean dried up long ago, its topographic remnants are a reminder of a time when Mars was warm, wet, and perhaps, wriggling with life.

Thanks for reading! See you next week.