Collapse or Flourish?
A conversation about our collective future and what we're actually building towards
There's a growing disconnect between our daily decisions and the long-term trajectory they create. As organizations and individuals navigate rapid technological change, we're often not connecting the dots between what we build, fund, buy, and support today, and the future we're helping create. This conversation is about bridging that gap.
Register for the Discussion
What we'll cover
Where we are right now
The decisions being made today across technology, business, and policy are shaping outcomes decades into the future. Yet there's often a gap between short-term thinking and long-term consequences. Here's what the current data shows:
The risk landscape
Leading AI researchers and technology leaders estimate the probability of human extinction or severe disempowerment by the end of the century, driven by how artificial intelligence is being developed and deployed, at anywhere from 5% to 80%. These aren't fringe predictions; they're assessments from people building these systems.
The wide range reflects genuine uncertainty about how the technology will evolve and how society will respond. What's clear is that the trajectory isn't predetermined; it depends on choices being made now.
The opportunity
The same technologies creating risk also offer unprecedented potential for human flourishing. AI could help solve major challenges around climate, disease, inequality, and more. The outcomes aren't binary; they exist on a spectrum shaped by how these tools are developed, deployed, and governed.
This isn't about being optimistic or pessimistic. It's about being intentional.
Interconnected challenges
Beyond AI, we're facing climate change, biodiversity loss, threats to cognitive autonomy, widening inequality, and challenges to democratic systems. These issues don't exist in isolation. They interact in complex ways, and technological solutions in one area can create problems in another.
Technology can be part of addressing these challenges, but only if we're thoughtful about implementation and unintended consequences.
The pattern we keep seeing
In conversations across sectors, a consistent gap emerges between day-to-day decision-making and awareness of long-term impact. This plays out differently depending on who you talk to:
In AI safety
People working on AI safety often face innovations that reduce risk in one dimension while potentially increasing it in others. The tradeoffs aren't straightforward. When is a risk too significant to accept? How do you weigh competing concerns when the stakes are existential?
These aren't just philosophical questions. They're practical challenges that researchers and organizations face regularly.
In business and investment
Entrepreneurs, investors, and business leaders are typically optimizing for near-term outcomes: quarterly results, product launches, funding rounds. The connection between these decisions and their contribution to larger patterns (positive or negative) often isn't explicit or measured.
This isn't a criticism. It's a structural challenge. The feedback loops are long, and the frameworks for thinking about long-term impact aren't always readily available.
The central question
How do we adapt design, innovation, technology development, policies, governance structures, and organizational behaviors to mitigate existential risks while contributing to human flourishing? More specifically, what does this look like in practice, in the actual decisions being made day-to-day?
The Collapse to Flourishing Index
We developed the CFI framework to help individuals and organizations systematically evaluate how their activities contribute to collapse or flourishing across critical dimensions. The framework is designed to be rigorous enough to be useful while remaining accessible enough to actually use.
The CFI evaluates impact across six interconnected dimensions:
D1: AI & Autonomous Systems
Assesses risks from AI takeover, authoritarian use, malicious deployment, and systemic collapse. Also evaluates potential for AI to support human flourishing through safety measures, transparency, alignment, and beneficial applications.
D2: Hospitable Planet (Climate & Nature)
Evaluates impact on climate stability, biodiversity, pollution, resource use, and environmental protection. Considers both direct and indirect effects on planetary systems that support life.
D3: Basic Rights, Governance & Democracy
Examines effects on information integrity, privacy, platform governance, algorithmic transparency, and democratic processes. Assesses whether technologies strengthen or undermine rights and democratic institutions.
D4: Functional Economies & Shared Prosperity
Looks at benefit distribution, economic opportunity, labor practices, market concentration, and whether economic systems are creating broadly shared prosperity or increasing inequality.
D5: Psychological Sovereignty & Social Cohesion
Evaluates impacts on cognitive autonomy, mental wellbeing, information quality, social trust, and community strength. Considers whether technologies enhance or diminish human agency and social connection.
D6: CBRN & Biosecurity
Assesses Chemical, Biological, Radiological, and Nuclear risks. Examines access controls, safety protocols, incident response, and oversight mechanisms for potentially catastrophic technologies.
Try the assessment: We've created a structured evaluation tool where you can assess organizations across these dimensions. It takes 10-15 minutes and provides a systematic way to think through these questions.
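To make the idea of a multi-dimensional assessment concrete, here is a minimal sketch of how a single evaluation could be represented in code. The dimension names are taken from the list above, but the -5 (collapse-leaning) to +5 (flourishing-leaning) scale, the Assessment class, and the unweighted average are illustrative assumptions, not the scoring method used by the actual CFI tool.

```python
# Hypothetical sketch of a multi-dimensional assessment record.
# Dimension names come from the CFI list above; the -5..+5 scale and
# the simple average are illustrative assumptions, not the real CFI scoring.

from dataclasses import dataclass, field

DIMENSIONS = [
    "AI & Autonomous Systems",
    "Hospitable Planet (Climate & Nature)",
    "Basic Rights, Governance & Democracy",
    "Functional Economies & Shared Prosperity",
    "Psychological Sovereignty & Social Cohesion",
    "CBRN & Biosecurity",
]

@dataclass
class Assessment:
    organization: str
    scores: dict = field(default_factory=dict)  # dimension -> score in [-5, +5]

    def rate(self, dimension: str, score: float) -> None:
        """Record a score for one dimension, validating name and range."""
        if dimension not in DIMENSIONS:
            raise ValueError(f"Unknown dimension: {dimension}")
        if not -5 <= score <= 5:
            raise ValueError("Score must be between -5 (collapse) and +5 (flourishing)")
        self.scores[dimension] = score

    def overall(self) -> float:
        """Unweighted average across rated dimensions (an assumed aggregation)."""
        if not self.scores:
            raise ValueError("No dimensions rated yet")
        return sum(self.scores.values()) / len(self.scores)

# Usage: rate a hypothetical organization on two dimensions and read the average.
a = Assessment("Example Org")
a.rate("AI & Autonomous Systems", 1.5)
a.rate("Hospitable Planet (Climate & Nature)", -2.0)
print(round(a.overall(), 2))  # -0.25
```

Even in this toy form, the structure makes the framework's core point visible: an activity can score positively on one dimension and negatively on another, and the interesting questions are about how those tradeoffs are weighed rather than a single number.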
What this session covers
This will be a working session, not a lecture. Here's the structure:
Framework overview
A concise walkthrough of the CFI framework and how it can be applied. This will be brief (15-20 minutes) to leave time for the more valuable parts: discussion and collaboration.
Open discussion
Participants share perspectives on these challenges and opportunities. The goal is to surface different viewpoints, identify blind spots, and learn from each other's experiences applying these concepts in practice.
Framework feedback
An opportunity to critique the CFI approach, suggest improvements, and identify gaps. The framework is designed to evolve based on real-world application and diverse input.
Collaborative brainstorming
Exploring initiatives, interventions, and approaches that could help shift trajectories towards flourishing. What solutions are we not considering? What obvious approaches are we missing? What might actually work?
Join the conversation
Whether you're building AI systems, making investment decisions, developing policy, or running an organization, your perspective is valuable. The more diverse the viewpoints in this conversation, the more useful it becomes.
If we can get more people thinking systematically about long-term impact, connecting daily decisions to larger patterns, we have a better chance of steering towards better outcomes. That's the goal here: practical frameworks for intentional decision-making about the future we're creating.
Register for the Discussion
Ways to get involved
Beyond this discussion, there are multiple ways to engage with this work:
Partnership opportunities
Interested in collaborating on this event, future events, or values-aligned programs focused on mitigating catastrophic risks? We're looking for partners who want to work on practical approaches to these challenges.
Get in touch: partner@bloom.pm
Support this work
Bloom operates as a hybrid nonprofit and social impact enterprise. Donations to the nonprofit support this research and convening work and are tax-deductible.
Learn more: bloom.pm/donate