PandoraHealth
Safe AI for mental resilience
Reflection • Safety

Vision & principles

Reflection • Safety by design • Alignment with care

Reflection as a starting point

PandoraHealth develops and explores AI applications that support reflection in situations of mental and moral strain. Not every situation requires treatment; often, space for reflection is the necessary first step — to structure experiences, explore tension, and put into words what feels unresolved.

AI can play a supportive role when that role is limited, explicit, and safe. PandoraHealth does not build AI as a problem-solver, but as a tool to support self-reflection.

Safety as a design principle

Safety is not an add-on but a design choice from the start.

PandoraHealth applications are therefore explicitly not therapy and not a replacement for professional care. They do not provide diagnoses and do not give treatment or medication advice.

Relation to professional care

PandoraHealth positions AI as a complement to existing care structures, not as an alternative. Design and pilots are aligned with professional frameworks and with the realities of care settings and safety-critical contexts.

Data and trust

PandoraHealth follows data minimisation: as little data as possible, for as short a time as possible. Where feasible, preference is given to temporary or local processing. Confidentiality, transparency, and user control are treated as core requirements.

Research and ethical basis

PandoraHealth collaborates with research institutions and professionals to align its applications with scientific insights, ethical guidelines, and European regulatory frameworks (including the EU AI Act). Development takes place through exploratory research and small-scale pilots.