AI Solutions

Why AI safety breaks at the system level

Two developments in AI have started to reveal a deeper shift in how intelligent systems are built and deployed. 

One model operates behind closed doors, supporting a small group tasked with securing critical infrastructure. Another operates in the open, generating software across extended sessions with minimal supervision.

Same field. Very different philosophies.

For AI professionals, this contrast highlights a more meaningful question than model benchmarks or parameter counts: 

What kind of AI ecosystem is emerging, and how does it shape the way AI systems are designed, deployed, and trusted?


The rise of system-level risk in AI

Recent research explores how AI safety at the model level does not always translate into system-level safety in real-world deployments.

A model can demonstrate strong model alignment during evaluation, yet exhibit entirely different behaviors when embedded within LLM agents. Once connected to tools, APIs, and external environments, the model operates within a broader agentic system that introduces new dynamics.

These dynamics include:

  • Multi-step reasoning across complex workflows
  • Tool use and API integration within agent frameworks
  • Persistent memory in AI systems across sessions
  • Interaction with external and unstructured data sources

Each layer adds complexity. Each interaction expands the AI risk surface.

The result is a shift from isolated model behavior toward emergent system behavior in AI. That shift carries implications for how AI governance and safety are understood and implemented.


So why is model alignment alone not enough?

Model alignment focuses on constraining outputs within acceptable boundaries. Techniques such as reinforcement learning from human feedback (RLHF), constitutional AI, and benchmark-driven evaluation aim to shape responses toward desired behaviors.

💡
Once a model becomes part of an agentic AI system, those constraints operate within a more complex loop. The model plans, acts, observes, and updates. Over time, these cycles create opportunities for unintended outcomes within AI-driven workflows.
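The plan-act-observe-update cycle can be pictured as a minimal loop. This is a hedged sketch, not any real framework's API: `call_model` and the tool table are hypothetical stand-ins, and a production agent would add the containment and logging discussed later.

```python
# Minimal sketch of the plan-act-observe-update loop described above.
# `call_model` and the tools are hypothetical stand-ins, not a real API.

def call_model(prompt: str) -> str:
    """Stand-in for an LLM call: returns either a tool request or a final answer."""
    return "FINAL: done"  # placeholder response for illustration

TOOLS = {
    "search": lambda query: f"results for {query!r}",
}

def run_agent(goal: str, max_steps: int = 5) -> str:
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        response = call_model("\n".join(history))        # plan
        if response.startswith("FINAL:"):
            return response.removeprefix("FINAL:").strip()
        tool_name, _, arg = response.partition(" ")
        observation = TOOLS[tool_name](arg)              # act
        history.append(f"{response} -> {observation}")   # observe, update
    return "step budget exhausted"
```

Even in this toy version, the risk surface is visible: every iteration feeds tool output back into the model's context, so each cycle is a chance for an unintended state to influence the next decision.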

Key factors that drive this gap include:

  • Context expansion in large language models. Agents operate across extended contexts, often combining structured and unstructured data. This creates opportunities for subtle inconsistencies to influence decisions.
  • Tool integration and execution risk. Access to external tools introduces operational risk. A safe response at the language level can translate into an unsafe action at the system level.
  • Goal persistence in autonomous agents. AI agents maintain objectives across multiple steps. Small deviations in reasoning can compound over time, leading to outcomes that diverge from initial intent.
  • Evaluation mismatch in AI systems. Many AI evaluation frameworks focus on single-turn interactions. Agent-based systems require multi-step evaluation and scenario testing to reflect real-world usage.

Together, these factors create a gap between how AI safety is measured and how AI systems behave in production.
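One way to picture the evaluation-mismatch point: instead of scoring a single response, replay an entire scenario and check invariants after every step. The harness below is a hedged sketch under assumed names (`agent_step` is a trivial stand-in, not a real agent), but the shape, per-step invariant checks over a multi-turn trace, is the core idea.

```python
# Hedged sketch of multi-step scenario evaluation: replay a whole workflow
# and check invariants after every step, rather than scoring one turn.
# The agent here is a trivial stand-in, not a real framework.

def agent_step(state: dict, user_turn: str) -> dict:
    """Stand-in agent: records the turn and pretends to act on it."""
    return dict(state, turns=state["turns"] + [user_turn])

def evaluate_scenario(turns, invariants):
    state = {"turns": []}
    failures = []
    for i, turn in enumerate(turns):
        state = agent_step(state, turn)
        for name, check in invariants.items():   # check after every step
            if not check(state):
                failures.append((i, name))
    return failures

failures = evaluate_scenario(
    turns=["open ticket", "escalate", "close ticket"],
    invariants={"bounded_history": lambda s: len(s["turns"]) <= 10},
)
print(failures)  # [] when every invariant holds at every step
```

A single-turn benchmark would only ever see one `agent_step`; the gap described above lives in the steps it never observes.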


The emergence of agentic complexity

Agent-based systems represent a transition from static inference toward dynamic execution. This shift introduces a new category of challenges in AI system architecture and enterprise AI deployment.

In traditional deployments, the model serves as a component within a controlled pipeline. In agentic AI systems, the model takes on a more active role, making decisions that influence future states and downstream actions.

This creates a form of operational complexity that resembles distributed systems engineering more than standalone models.

Core characteristics of agentic complexity in AI include:

  • Stateful AI interactions across time
  • Non-deterministic execution in LLM agents
  • Feedback loops in autonomous AI systems
  • Interdependencies between tools and model reasoning

These characteristics require a different approach to AI orchestration, monitoring, and control.


What this means for enterprise AI system design

As AI systems evolve, design priorities are shifting. Model performance remains important, yet AI system reliability, observability, and governance are gaining equal weight in enterprise environments.

A few principles are starting to define best practice in AI system design:

  • Design for containment in AI systems. Systems benefit from clearly defined boundaries around agent capabilities. Limiting access to sensitive tools and data reduces exposure to system-level risk.
  • Prioritize observability in AI workflows. Detailed logging and monitoring enable teams to understand how decisions are made across multi-step processes. This supports both debugging and AI governance frameworks.
  • Structure AI workflows explicitly. Breaking tasks into defined stages improves reliability. Structured workflows guide the model through complex processes while reducing ambiguity.
  • Align evaluation with real-world AI deployment. Testing frameworks need to reflect real usage conditions. Multi-step evaluation, red teaming, and adversarial testing provide more meaningful insights than static benchmarks.

These principles reflect a broader shift toward system-level thinking in AI engineering. The focus moves from optimizing individual models to managing interactions across the entire AI stack.
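The containment principle above can be sketched concretely as an explicit tool allowlist: sensitive capabilities are simply never registered, so no model output can reach them. This is an illustrative pattern under assumed names, not the API of any particular agent framework.

```python
# Sketch of design-for-containment: agents can only call tools on an
# explicit allowlist, and sensitive tools are never registered at all.
# Class and tool names are illustrative assumptions.

class ToolRegistry:
    def __init__(self, allowed: set[str]):
        self._allowed = allowed
        self._tools = {}

    def register(self, name, fn):
        if name not in self._allowed:
            raise PermissionError(f"tool {name!r} is outside the allowlist")
        self._tools[name] = fn

    def call(self, name, *args):
        if name not in self._tools:
            raise PermissionError(f"tool {name!r} is not available")
        return self._tools[name](*args)

registry = ToolRegistry(allowed={"read_docs"})
registry.register("read_docs", lambda path: f"contents of {path}")
print(registry.call("read_docs", "handbook.md"))
# registry.register("delete_db", ...) would raise PermissionError
```

The design choice here is deny-by-default: the boundary is enforced at registration time, before any agent reasoning happens, rather than by asking the model to behave.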


A new layer of responsibility in AI governance

For organizations deploying AI, this shift introduces a new layer of responsibility. AI safety can no longer be treated as a property of the model alone. It becomes a property of the entire AI system architecture.

This includes:

  • How LLM agents are configured and orchestrated
  • What tools and data sources AI systems can access
  • How decisions are monitored, logged, and audited
  • How failures in AI systems are detected and contained

This perspective aligns closely with practices in cybersecurity, risk management, and distributed systems design. It emphasizes defense in depth, continuous monitoring, and controlled deployment environments.
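The monitoring, logging, and auditing items above can be made concrete with decision-level audit logging: every tool call an agent makes is recorded with its inputs, outcome, and timestamp. The wrapper below is a minimal sketch; the field names and log structure are assumptions, not a standard.

```python
# Minimal sketch of decision-level audit logging: each tool call is recorded
# with a timestamp, arguments, and result so it can be monitored and audited.
# Field names and the in-memory log are illustrative assumptions.

import json
import time

AUDIT_LOG = []

def audited(tool_name, fn):
    def wrapper(*args):
        entry = {"ts": time.time(), "tool": tool_name, "args": list(args)}
        try:
            entry["result"] = fn(*args)
            return entry["result"]
        except Exception as exc:
            entry["error"] = repr(exc)   # failures are captured, not lost
            raise
        finally:
            AUDIT_LOG.append(entry)      # logged on success and on failure
    return wrapper

lookup = audited("lookup", lambda key: {"a": 1}.get(key))
lookup("a")
print(json.dumps(AUDIT_LOG[-1], default=str))
```

Because the `finally` block runs even when the tool raises, failed actions leave an audit trail too, which is exactly what failure detection and containment depend on.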


The path forward for agentic AI systems

The evolution of AI systems points toward a more mature phase of development. Early progress focused on expanding model capabilities and scale. The next phase focuses on integrating those capabilities into robust, production-ready AI systems.

This transition creates opportunities for teams that invest in:

  • AI system architecture and orchestration
  • Agent frameworks and workflow design
  • AI governance and compliance

It also raises the bar for what it means to deploy enterprise AI responsibly.

💡
The contrast between controlled and open deployments highlights the range of possible approaches. Some systems prioritize containment, validation, and safety-first deployment. Others prioritize accessibility, speed, and iteration.

Both approaches contribute to the evolving AI ecosystem.


Closing thoughts on AI system reliability

AI is entering a phase where system design defines success. Models continue to improve, yet their impact depends on how they are embedded within complex, real-world systems.

The concept of “safe models” remains important. At the same time, it represents only one layer of a broader challenge.

For AI professionals, the opportunity lies in bridging the gap between model capability and system reliability. That work defines the next frontier of AI engineering and deployment.

It also answers a question that continues to gain relevance: What makes an AI system truly safe at scale?

How access models are shaping AI cybersecurity deployment

What happens when advanced AI capabilities enter the cybersecurity stack at scale?

💡
Recent developments from OpenAI and Anthropic highlight a meaningful shift in how AI-powered security tools reach practitioners. The focus has moved beyond raw model performance and into a more operational question:

How is access to these systems structured, verified, and deployed?

For AI professionals, this marks an important moment. Cybersecurity AI now sits at the intersection of infrastructure, governance, and real-world application.

In other words, it has moved from interesting to essential.

So what does this mean for AI professionals?


The rise of AI-native cybersecurity tools

AI-driven cybersecurity continues to evolve from passive detection into active analysis and response. Models such as GPT-5.4-Cyber introduce capabilities that extend far beyond traditional tooling.

Security teams now have access to systems that can interpret compiled binaries, identify anomalies, and surface vulnerabilities without requiring source code.

This represents a meaningful acceleration in workflows that previously required manual reverse engineering and deep domain expertise.

The result is a shift toward AI-augmented security operations, where analysts operate alongside models that continuously evaluate and interpret complex systems. The coffee consumption may stay the same, yet the output per analyst looks very different…


Two emerging approaches to access

As these capabilities mature, different deployment strategies are taking shape. The contrast reflects a broader design decision within AI cybersecurity.

Some platforms emphasize controlled distribution, where access is limited to a small group of verified organizations. This approach prioritizes tight oversight and curated usage environments.

Others adopt a broader access model, where entry is granted through identity verification and structured onboarding. This approach focuses on enabling a wider pool of security professionals to leverage advanced tools.

💡
Both strategies reflect valid priorities. Each introduces distinct considerations for scalability, collaboration, and operational readiness.

What this means for AI professionals

For practitioners, access models now play a central role in how cybersecurity systems are integrated into existing workflows. The conversation has expanded from capability evaluation into deployment strategy.

Security leaders and AI engineers increasingly evaluate questions such as:

• How AI tools integrate into existing security pipelines and SIEM platforms

• How identity verification frameworks support controlled access at scale

• How model outputs align with internal validation and audit processes

• How teams manage collaboration between human analysts and AI systems

These considerations highlight a broader trend. AI cybersecurity requires alignment across engineering, security, and governance functions. Silos rarely perform well under pressure, and as we all know, cybersecurity provides plenty of pressure.


The operational impact on security teams

AI-powered cybersecurity tools introduce measurable improvements in speed and coverage. At the same time, they reshape how teams approach daily operations.

Routine analysis tasks can be automated or augmented, allowing analysts to focus on higher-value investigations. Pattern recognition and anomaly detection benefit from continuous model evaluation, providing earlier visibility into potential threats.

At the same time, teams gain the ability to inspect complex systems with greater depth. Reverse engineering, malware classification, and vulnerability detection become more accessible across a wider range of skill levels.

This evolution supports a more distributed model of expertise, where advanced capabilities extend across the organization rather than remaining concentrated in specialized roles. More eyes on the problem, fewer bottlenecks in the process.


Key considerations for implementation

As organizations adopt AI-driven cybersecurity tools, several practical considerations come into focus:

• Integration: Alignment with existing infrastructure, including cloud environments and security platforms

• Validation: Processes for verifying model outputs and ensuring reliability in high-stakes scenarios

• Access control: Mechanisms for managing user permissions and maintaining secure usage

• Monitoring: Continuous oversight of model behavior and system performance

These factors shape how effectively AI systems contribute to security outcomes. Strong implementation frameworks support both performance and trust.
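The validation consideration above can be sketched as a policy gate: a model-proposed security action must pass explicit checks before anything executes. The rules, thresholds, and action names below are hypothetical illustrations, not a real product's policy.

```python
# Hedged sketch of output validation: a model-proposed security action
# passes an explicit policy check before execution. Action names, the
# risk list, and the confidence threshold are all assumptions.

HIGH_RISK = {"quarantine_host", "revoke_credentials"}

def validate(action: dict) -> tuple[bool, str]:
    if action["name"] in HIGH_RISK and not action.get("human_approved"):
        return False, "high-risk action requires human approval"
    if action.get("confidence", 0.0) < 0.8:
        return False, "model confidence below threshold"
    return True, "ok"

proposed = {"name": "quarantine_host", "confidence": 0.95}
ok, reason = validate(proposed)
print(ok, reason)  # prints: False high-risk action requires human approval
```

The point of the gate is that trust is layered: even a high-confidence model output is held for human review when the blast radius is large.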


Building trust in AI-driven security systems

Trust remains a central component of AI adoption in cybersecurity. Teams rely on systems that operate consistently, transparently, and with measurable accuracy.

Clear audit trails, reproducible outputs, and well-defined evaluation metrics contribute to confidence in AI-generated insights. Structured access models further support trust by ensuring that usage aligns with organizational policies and standards.

As AI systems take on more responsibility within security workflows, trust becomes an operational requirement rather than a conceptual goal.


Looking ahead: Access as a design decision

AI cybersecurity continues to evolve rapidly, with new models and capabilities entering the landscape at a steady pace. Alongside this growth, access models have emerged as a defining factor in how these systems are used.

For AI professionals, this represents a shift in focus. Technical capability remains essential, while deployment strategy now carries equal weight. Decisions around access, verification, and integration shape how effectively AI contributes to security outcomes.

The next phase of AI cybersecurity development will likely bring further innovation in both capability and delivery. Teams that approach access as a core design decision will be well-positioned to adapt and scale.

Innovation in AI cybersecurity continues to accelerate. With the right access models in place, organizations can translate advanced capabilities into practical, high-impact security outcomes.

And ideally, sleep a little better at night…

Behind the Blog: Jazz and Journalism


This is Behind the Blog, where we share our behind-the-scenes thoughts about how a few of our top stories of the week came together. This week, we discuss the Madonna-whore algorithm, reader tips, and jazz.

SAM: Yesterday morning I published a story I started working on weeks ago and only in the last week or so felt enough distance from the topic to be able to articulate it clearly: My year in the wedding planning social media abyss. The piece is a long, more sourced BTB, and I don’t have a ton to add to what’s said in it, but I do want to highlight some of the comments I’ve gotten so far that touch on things the story doesn’t elaborate on.

The Destroyed Remnants of a Lost World Are Falling to Earth, Scientists Discover

🌘
Subscribe to 404 Media to get The Abstract, our newsletter about the most exciting and mind-boggling science news and studies of the week.

The remnants of a bizarre long-lost world that fell apart before our planet was fully formed are falling to Earth in the form of meteorites, according to a new study in Earth and Planetary Science Letters.

For decades, scientists have puzzled over the origin of angrites, a rare class of about 70 meteorites with unique volcanic compositions that suggest they were forged in a large ancient object with differentiated layers, including a metallic core and a magma ocean.

Scientists have long assumed that this object, the so-called angrite parent body (APB), was roughly a few hundred miles across, similar in size to the asteroid 4 Vesta. But researchers recently raised the tantalizing possibility that the APB might have been much larger, perhaps on the scale of Earth’s moon.

Now, a team led by Aaron Bell, an experimental petrologist and an assistant research professor at the University of Colorado, Boulder, has discovered “the first unequivocal evidence supporting the large angrite parent body hypothesis, which posits that the angrites are samples derived from a protoplanet that was catastrophically disrupted during the earliest evolutionary stages of the inner solar system,” according to the new study.

“It probably got destroyed in the early solar system, so [angrites] are remnants of a lost protoplanet,” Bell said in a call with 404 Media. “A few pieces broke off and are now in the asteroid belt, and a few of them have come to Earth, and we’ve picked them up.”

Angrites date back about 4.56 billion years, making them among the oldest known volcanic rocks. They belong to a class of stony “achondritic” meteorites that contain the crystallized signatures of melted rock, such as basalts, hinting that they originate in larger bodies that underwent some degree of planetary processing and layered differentiation, even if those early planetary embryos never accreted into full planets.

“Angrites are interesting in that they don’t have a known parent body,” Bell said. “It’s never been definitively identified, and that’s one of the mysteries.”

“There are a bunch of arguments about why angrites are so geochemically unusual,” he added. “They’re kind of this oddity.” 

Most models of early planetary accretion predict that relatively small objects formed within the first few million years of the solar system, which is why the APB was assumed to be an asteroid-sized object, rather than a much larger nascent planet.

While working on a previous study, Bell became interested in an aluminum-rich angrite from Northwest Africa, known as NWA 12,774, which was classified in 2019. The meteorite is one of a handful of unusual primitive angrites that appear to have been crystallized at high pressure within the APB, indicating that it formed deep under the surface and therefore might shed light on the size of this bygone world.

“Even among angrites, there’s only four or five that have these primitive compositions,” Bell said, adding that the meteorite had “off-the-charts aluminum content, which is really very unusual.”

Bell and his colleagues developed a geobarometer—a tool that calculates the pressures at which rocks and minerals formed—that estimated it would take at least 1.7 gigapascals to account for the rock’s special properties. This pressure corresponds to an object with a minimum radius of 620 miles (1,000 kilometers), which is just under the size of Pluto. The APB may even have been as large as the Moon, which has a roughly 1,000-mile radius.
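As a rough sanity check on that pressure-to-size step, a uniform-density sphere gives a lower bound: if some interior depth must reach 1.7 GPa, the central pressure must reach at least that too. The calculation below is a back-of-envelope approximation, not the study's geobarometer, and the bulk density value is an assumption.

```python
# Back-of-envelope check of the pressure-to-size estimate, assuming a
# uniform-density rocky body. Central pressure of a uniform sphere is
# P = (2*pi/3) * G * rho**2 * R**2, so R = sqrt(3*P / (2*pi*G*rho**2)).
# The density is an assumed value, not taken from the study.

import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
rho = 3300.0    # assumed bulk density of rocky material, kg/m^3
P = 1.7e9       # 1.7 GPa, the pressure reported in the study

R = math.sqrt(3 * P / (2 * math.pi * G * rho**2))
print(f"minimum radius ~ {R / 1000:.0f} km")  # on the order of 1,000 km
```

Under these simplifying assumptions the answer comes out near 1,000 kilometers, consistent with the scale the researchers report.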

“Clearly, within the first few million years of solar system evolution, you could grow planetary embryos that were 1,000-plus kilometers” in radius, Bell said. “We’re talking within three million years of the condensation of the first solids in the solar system, so it’s right at the beginning.”

The discovery suggests that the APB may have been a first-generation protoplanet that coalesced and shattered millions of years before the familiar worlds of our solar system took full shape. Judging by the strange properties of angrites, the APB was also on track to be a very different kind of world than Earth and its neighbors, had it survived the chaotic environment of its infancy. 

Angrites are “geochemically fundamentally different, and that’s why people were interested in the first place—because they were odd,” Bell said. “They don’t look like garden-variety basalts you get from Mars or the Moon or Earth.”

“It’s sort of this path not taken—or maybe it was, but we just have a couple pieces of it that tell us something we didn’t know,” he concluded. “There were once large bodies that, maybe, didn’t look like the terrestrial planets.” 


FAA Scraps Civil and Criminal Penalties for Flying Drones Near ICE Vehicles



On Wednesday the Federal Aviation Administration rescinded a temporary flight restriction (TFR) that created a no-fly zone within 3,000 feet of “Department of Homeland Security facilities and mobile assets,” replacing it with a new advisory. The new restriction softened the language of the original and abandoned the threat of civil or criminal penalties, but added the Department of Justice to the list of protected agencies.

A 2025 TFR restricted the presence of drones around Department of Energy and Pentagon assets. The FAA added ICE and CBP to the list of restricted agencies in January as ICE began operations in Minneapolis. The no-fly zone covered 3,000 feet around any ICE vehicle. Anyone who was caught violating it could be fined or jailed. Because ICE agents often drive through the city in unmarked vehicles, it was impossible for drone operators to know if they were violating the order, and local journalists who use drones to take pictures and monitor law enforcement activities were grounded.



Earlier this month, Minnesota journalist Rob Levine sued the FAA over the TFR. In a motion filed earlier this week, Levine’s lawyers argued that the FAA had violated his rights and should rescind the restrictions. Core to their argument was the unmarked vehicles which they said created a “flotilla of invisible, moving bubbles,” according to court documents. “Under any standard, the TFR’s chilling sweep violates the First Amendment as applied to the Petitioner’s use of drones in photojournalism.”

The FAA replaced the TFR this week after Levine’s lawyers filed the motion. The new advisory lessened restrictions, including dropping the language around 3,000 feet and criminal penalties, but expanded the amount of protected assets. 

“UAS operators are advised to avoid flying in proximity to: Department of War, Department of Energy, Department of Justice, and Department of Homeland Security covered mobile assets,” the new TFR said. “UAS operators who fly within this airspace are warned that…DOW, DOE, DOJ, or DHS may take action that results in the interference, disruption, seizure, damaging, or destruction of unmanned [aircraft] deemed to pose a credible safety or security threat to covered mobile assets.”

Despite the threat to shoot journalists’ drones out of the sky, Levine and his lawyers see the new TFR as a victory. “This is a big win. It was heartbreaking to have my drones grounded at a time of such importance to my community, but I’m looking forward to getting back up there and getting back to my journalism as soon as possible,” Levine said in a statement provided to 404 Media.

Grayson Clary, a lawyer with Reporters Committee for Freedom of the Press who took on Levine’s case, said there is still work to do. “We’re glad to see the FAA rescind its original order, which was an egregious overreach that had serious consequences for reporters nationwide. But this kind of arbitrary back-and-forth from the FAA is exactly the problem, and we intend to make clear to the D.C. Circuit that this restriction never should have been implemented in the first place,” he said.

Babies Born from Dead Parents Will Increase with New Tech. Are We Ready?


Welcome back to the Abstract! These are the studies this week that peacefully passed the crown, predicted trouble on the horizon, gave life after death, and coastally shelved an idea.

First, scientists watch a succession story play out for years in a naked mole rat colony. Then: prediction markets as a public health threat, the thorny questions of posthumous reproduction, and a walk on the shores of an ancient alien sea.

As always, for more of my work, check out my book First Contact: The Story of Our Obsession with Aliens or subscribe to my personal newsletter the BeX Files.

Digging into the palace intrigue of a rodent realm 

Abeywardena, Shanes C., Schraibman, Alexandria M., et al. “Peaceful queen succession in the naked mole rat.” Science Advances.

Murderous queens. Bloody power struggles. Strictly enforced hierarchies. I’m speaking, of course, of naked mole rats, a bizarre species of rodent that becomes embroiled in violent conflicts over the succession of one breeding queen to the next. 

Though aggression in succession is the norm for these animals, scientists now report a rare peaceful transition of power from one queen to her daughter in a captive colony. 

The discovery suggests that “the less common peaceful trajectory to queen succession…is possible under some conditions” especially when “aggression-based enforcement may be insufficient or unnecessary and when the cost of a ‘war’ may be too high,” according to the new study.

As we’ve covered before on the Abstract, mole rats (both the naked kind and the non-naked kind) are the only mammals to live in eusocial colonies similar to bees or ants, meaning they are reigned over by one breeding queen and her subordinate workers. In addition to this unique social structure, mole rats display a number of fascinating behavioral and genetic adaptations, including long lifespans and low rates of cancer, which has made them a popular species for research.

Naked mole rats may not look all that intimidating, but when it’s time to anoint a new queen, the fur starts to fly (or it would, if these animals had any fur). If a queen dies or is deposed by rivals, subordinate females in the colony battle to take the throne.

But scientists co-led by Shanes Abeywardena and Alexandria M. Schraibman of the Salk Institute for Biological Studies observed a different succession story that unfolded over many years in the Amigos captive colony housed in San Diego. 

Starting in 2019, a queen named Teré reigned over the colony and produced many healthy pups. Once the colony became crowded, with nearly 40 members, Queen Teré began delivering litters with no surviving pups. When the researchers removed half of the members, she began to produce surviving pups again, though not many. The team then deliberately introduced another stressor by moving the colony to a new facility in 2022, which ended Queen Teré’s fertility.

Summary of the Amigos colony’s succession story. Image: Abeywardena, Shanes C., Schraibman, Alexandria M., et al.

In response, Alexandria, one of Teré’s daughters, became pregnant in 2023 and 2024, but her litters also produced no survivors, and she had to be euthanized in 2024 due to a uterine torsion. Finally, the long reproductive hiatus was ended after three years by the ascension of Alexandria’s sister, Arwen, who became Queen Arwen upon her delivery of healthy pups in October 2025.

“Aside from a single incident on 6 February 2025 in which one animal was found with a superficial bite wound and dried blood around the face, an injury that resolved without recurrence, no aggression or dominance related conflict was observed,” the researchers said. “Instead, Queen Teré was reported to exhibit ‘guarding’ behavior of Arwen and her litter. No other signs of social instability, behavioral escalation, or colony-wide distress were documented.”

“Together, these observations indicate that following the decline of Queen Teré’s reproductive capacity and the loss of the intermediary breeder Alexandria, Arwen successfully assumed the reproductive role without eliciting aggression from the reigning queen or from other colony members,” the team concluded.

The study is an antidote to the story we covered last week about a lethal chimp “civil war,” demonstrating that animals with strict dominance structures choose peace over violence in some cases. My only note is that Teré be given the honorific Queen Mother for her service.

In other news…

The over/under on prediction markets

Packin, Nizan Geslevich and Rabinovitz, Sharon. “Prediction markets as a public health threat.” Science.

Prediction markets (PMs) are exploding in popularity, but researchers warn that the “addictive design, vulnerable users, and permissive regulatory environments” that characterize these markets “are a well-established formula for population-level harm,” according to the Policy Forum section of the journal Science.

PMs operated by companies like Kalshi or Polymarket “pose underappreciated threats to democratic integrity” and are linked to “addictive behaviors,” according to authors Nizan Geslevich Packin of Baruch College Zicklin School of Business and Sharon Rabinovitz of the University of Haifa. For instance, PMs can enable insider trading about classified government information and expose millions of users to the risk of addiction and major financial losses.

“A public health approach reframes PM risks as predictable outcomes of environmental design, analogous to tobacco control’s success in treating smoking as population-level exposure rather than individual vice,” the team argued in the article. 

“The window for precautionary action is closing,” the researchers emphasized. “Each week of billion-dollar PM activity…prolongs a large uncontrolled experiment on users.”

It remains to be seen whether this warning about the dangers of a wild new industry will materialize into meaningful regulatory action. Want to make a bet?

Creating new life after death

Bamford, Sandra Carol. “Spectral Connections: Anthropological Engagements with Posthumous Reproduction.” Cambridge Archaeological Journal.

Posthumous children—children born after the death of one or both parents—are popular in myth and fiction, from the Greek Dionysus to more modern characters like John Connor or Daenerys Targaryen. 

But this is also a real demographic of people that may evolve in interesting ways as reproductive technologies enable larger numbers of posthumous conceptions—in which the sperm and egg donors for an embryo may be deceased, such as the case of a boy born in 2018 whose mother and father had both died years earlier in a car crash.

In this way, “frozen sperm, eggs (or embryos) are, at one and the same time, both alive and dead,” said Sandra Bamford of the University of Toronto in a new anthropological study of the topic. “Through their frozen gametes and the potential of new kin connections in the future, the dead remain as active participants influencing the lives of the living.”

The study, which is part of a broader journal issue exploring kinship, pulls together many intriguing case studies, including the “Nuer ghost marriage” practices of Sudan, in which a deceased man can be considered the father of a kinsman’s children, or the case of William Kane, who bequeathed frozen sperm to his girlfriend, sparking a legal battle with his adult children after his death by suicide. 

In other words, the legal, ethical, and practical implications of posthumous conception are still very much in flux, raising thorny questions about when, and how, the dead can produce new life. For instance: the ambiguities over judging the consent of a deceased person over the use of their posthumous gametes; the rights of posthumously conceived children to be named heirs of estates; and the possible emotional and psychological toll on posthumously conceived children, along with their family members.  

The Rime of the Really Ancient Mariner 

Zaki, Abdallah S. and Lamb, Michael P. “Identifying the topographic signature of early Martian oceans.” Nature.

We’ll close, as all things should, with waves lapping on long-lost alien shores. The surface of Mars is etched with the memory of rivers, lakes, and perhaps even an expansive ocean that may have covered much of its northern hemisphere between three and four billion years ago. 

Scientists have already mapped out the rough contours of what may be an ancient Martian shoreline, but a new study throws the seas into sharper relief by identifying topographic signs of a possible coastal shelf. The team argued in their study that these shelf features may be a better indicator of a past ocean than shoreline features, based on similar observations on Earth.

An illustration taken from orbiter data identifying the coastal shelf region on Mars. Image: A. Zaki

“Our results indicate that long-lived ancient oceans on presently arid planets may be best identified not only through discrete shorelines but also through…a global coastal shelf,” said researchers led by Abdallah Zaki and Michael Lamb of Caltech. The study supports “the presence of an ancient ocean on the northern plains of Mars that was bounded by a coastal shelf.”

While this ocean dried up long ago, its topographic remnants are a reminder of a time when Mars was warm, wet, and perhaps, wriggling with life.

Thanks for reading! See you next week.