If you’re interested in technological innovation, and in having a framework for thinking about how to steer it, participate in it, or capture value from it, you might find this “map” useful.
It’s formatted as a laundry list of topics that seem essential for developing a holistic view of the nature, status, merits, and potential of emerging technologies. Feel free to skip sections, double-click into topics, or reach out to chat about an individual line item or section.
Table of contents:
Machines: Machines as powerful tools and entities
Humans: Unlocking human potential using technology
Attitudes: Examining philosophies towards emerging technology
Environments: Studying the spaces in which the future will unfold
1. Machines as powerful tools and entities
1.a. Studying machines as powerful tools
From a commercial lens:
The supply chain for AI tools or the AI “stack”, and where there is potential for value capture
Illustrative use-cases at the devtool and production layer: development frameworks and infrastructure for building and training models (e.g., PyTorch, TensorFlow); model training platforms, providing environments to train models at scale; tools supporting MLOps
Illustrative use-cases at the application layer:
B2B: Coding assistance & co-pilot, workflow optimization (across industries), document management, etc.
D2C: Conversational interfaces, recommendation systems, multilingual capabilities, virtual assistants, social media representatives, etc.
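The devtool-layer frameworks above (PyTorch, TensorFlow) automate variations of one core loop: compute a loss, take its gradient, and update parameters. A minimal sketch in plain Python, for illustration only (real frameworks add automatic differentiation, GPU execution, batching, and much more):

```python
# Minimal gradient-descent training loop: fit w in y = w * x to data.
# This is the core pattern that frameworks like PyTorch automate at scale.

def train(data, lr=0.1, epochs=50):
    w = 0.0  # single learnable parameter
    for _ in range(epochs):
        # Analytic gradient of mean squared error: d/dw (w*x - y)^2 = 2*(w*x - y)*x
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad  # gradient-descent update step
    return w

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # generated from y = 2x
w = train(data)
print(round(w, 4))  # converges to ~2.0
```

Everything beyond this loop — autograd, model architectures, distributed training, MLOps — is what the “stack” layers above package and sell.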
Mapping out what the “future of work” looks like, and associated services
Thesis: If the “future of work” involves increasingly remote and decentralised work, then investors should look at immigration companies resolving inefficiencies in talent distribution (see Plymouth Street, tukki.ai), or HR-tech for global hiring (see Deel)
From an academic lens:
Understanding the current status of technical AI research (lots of catching up to do on Import AI); specifically:
The different types of models (e.g., traditional neural networks vs. transformer models) and model architecture
Mitigating bias, increasing interpretability, and increasing trustworthiness of output
How models are trained (e.g., training mechanisms, parameters, fine-tuning)
Understanding the components of the AI tech stack (excited to dive into Karpathy’s LLM101n)
From a design perspective:
Technology alignment: How can we design consumer platforms so that they are less dopamine-hackey and more serotonin-inducing (inspiration tweet)? Potential solutions:
Redesigning: New value propositions (e.g., requiring payment, ability to turn off recommendation systems), etc.
Customizing: Complete user customization of their virtual experiences (for example, the ability to set unevadable screentime barriers; see onesec app)
Circumventing: A single LLM (“assistant bot”) to scour your social media accounts and summarise important posts, messages, and notifications, and also post on your account given specific instructions
Implications of commercial adoption:
Economic:
The nature of technological and cognitive unemployment, and alternate economic systems or measures (e.g., UBI)
What a shift to AGI means for economic output & wages (Korinek’s work here)
Philosophical:
Legal:
Misrepresentation (with deepfakes across audio, images and videos)
Potential copyright infringement (with scraping data across the web for model training)
1.b. Studying machines as potential entities, specifically:
The conditions for machine sapience (intelligence) and sentience (consciousness) — determining boundaries and benchmarks to identify markers of intelligence and consciousness could inform how we ought to treat machines as moral patients, or manage risk around them
Measuring machine intelligence:
Dennett’s “quick-probe” reading of the Turing Test holds that passing it is sufficient (though not necessary) evidence of intelligence; objections include:
Block’s “Blockhead” argument: A machine could be pre-programmed with a canned response to every possible conversation, so the Turing Test could be passed without intelligence
Searle’s Chinese Room argument: A thought experiment suggesting that passing the Turing Test is not sufficient for intelligence, since a machine could simulate understanding through symbol manipulation without genuinely understanding or having intentionality
Ada Lovelace objected that machines cannot originate anything; the Lovelace Test (proposed in her honour by Bringsjord, Bello, and Ferrucci) holds that if a machine goes off-script and produces creative output that its programmers cannot explain, it could be considered intelligent
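Block’s objection is easy to make concrete: a lookup table can produce fluent replies with no understanding at all. A toy sketch (the table and replies are invented for illustration; Block’s point is that, scaled to cover every possible conversation history, such a table could pass the Turing Test without intelligence):

```python
# A toy "Blockhead": canned replies selected by table lookup, with no
# reasoning and no state. Intelligence-free, yet locally indistinguishable
# from conversation for the inputs it happens to cover.

CANNED = {
    "hello": "Hi there! How are you today?",
    "are you intelligent?": "I like to think so. What gave it away?",
    "what is your favourite colour?": "Blue. It reminds me of the sea.",
}

def blockhead(prompt):
    # Normalise the input, then look it up; fall back to a generic deflection.
    return CANNED.get(prompt.lower().strip(), "Interesting. Tell me more.")

print(blockhead("Hello"))  # "Hi there! How are you today?"
```

The philosophical question is whether a sufficiently large version of this table would count as intelligent, or merely as a record of someone else’s intelligence.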
Measuring machine consciousness:
How to measure consciousness in machines (see Rob Long’s Consciousness in AI paper)
Studying AIs as moral agents, including:
The severity, scale, and timelines of AGI and superintelligence
Some levers mediating AGI and superintelligence timelines include:
Data — specifically, data volumes and dataset complexity
Data volumes: available data volumes may increase with advancements in data collection techniques and improvements in synthetic data generation
Complexity of datasets — may increase with:
Data processing advancements (automated annotation tools) and crowdsourcing development / annotation
Multimodal data integration (e.g., text + audio + images)
Trends in algorithmic progress
Fleets of automated AI researchers could compress a decade of advances into a year — so there may be high returns on training algorithms specifically for AI research
Open question: How much are AI capabilities driven by better hardware letting us scale existing models (via compute), and how much is driven by new ML ideas (via algorithms)? (Inspiration blog post)
Consumer demand, as measured by revenues from AI products, prompting tech giants to continue pouring investment into capex
Investment by big-tech and governments into compute, datacenters, and associated capex (land, permitting, buildings, power, cooling infrastructure). Supply-side bottlenecks could include (see situational-awareness.ai):
Power: spare capacity is limited and total electricity generation is growing slowly (<3% over the next decade)
GPUs: AI chips require specialized memory and packaging specs, of which there is a limited pre-existing supply; Aschenbrenner predicts that TSMC would have to double production speeds to keep up with AI chip demand
In addition to AI timelines, it’s worth exploring takeoff speeds, what it takes to create Friendly AIs, and the threats (potentially existential) posed by unFriendly or malevolent AIs (see the case of the paperclip maximiser)
Questions around moral obligation, trustworthiness, and responsibility, as machines increasingly become autonomously operating entities in society (e.g., robotaxis)
The human impacts of being surveilled and perceived by a machine
Corporate and government surveillance, in which machines collect data instrumentally, for consumption by humans
Agentic machines, paying attention to a human for an intrinsic purpose (for example, to collect data to inform its own behaviour, form judgments, etc.; see the Convivial Society)
Studying AIs as moral patients in society
Theories of moral patiency: Understanding how to classify an AI as an entity (e.g., person, pet, or property) in society, and the degree of moral treatment it is entitled to (e.g., that of a plant, snail, dog, or human)
Rationale: Examining why it is important to study moral patiency and the rights of digital people (e.g., reducing cross-species suffering, increasing the probability that agentic AI systems treat humans better, etc.)
Other topics
Can robots have morally relevant properties and abilities? Can they simulate morally relevant properties?
What is the future of robot and person relationships? (As friends, colleagues, and romantic partners? See replika.ai, friend.com)
Studying human minds to inform our understanding and design of synthetic minds, including:
A history of cognitive science, and evolution in our understanding of the brain (e.g., symbolism vs. connectionism; cognitivism vs. behaviourism, etc.)
Personal identity: Conditions for personal identity continuity, and the nature of immortality
The philosophy of mind
Intelligence: Types of human intelligence (crystallised and fluid), measures of intelligence, etc.
Consciousness:
Theories of consciousness (functionalist & computational, biological)
The structure of consciousness (e.g., unified vs. decentralized (as in octopuses and split-brained patients))
Panpsychism, and collective and recursive consciousness (see short-story here)
Qualia and phenomenal consciousness
Mental states: emotion, perception, memory, intentionality
Studying animal minds, to improve an understanding of consciousness and subjective experience in non-human minds
Objections to creating AGI
Understanding the current status of AI safety and alignment research, intended to mitigate risks from developing superintelligent AI; for example:
Progress on the outer alignment / reward misspecification problem: specifying a reward structure for a model such that there are no exploitable loopholes (which may entail conveying the sum of human values and ethics). See DeepMind’s research into specification gaming
Progress on the inner alignment / goal misgeneralization problem: ensuring that the trained model actually pursues the specified objective and acts in accordance with human preferences, rather than a proxy goal that generalizes poorly (Source)
Other relevant concepts: instrumental convergence, orthogonality, RLHF, red-teaming, compute-overhang
Orgs working on AI safety research & principled development: CHAI, OpenAI, Ought, MIRI, FLI, Anthropic, ssi.inc
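The outer-alignment failure mode can be made concrete with a deliberately minimal, invented example (not drawn from the cited DeepMind work): a cleaning agent whose proxy reward pays for *marking* tiles clean rather than for cleaning them, so optimizing the proxy selects exactly the policy the true objective ranks last.

```python
# Toy reward misspecification ("specification gaming"): the proxy reward
# pays for "mark_clean" actions, while the intended (true) reward pays only
# for tiles actually cleaned. A trajectory is a list of (action, actually_cleaned).

def proxy_reward(trajectory):
    # Misspecified: rewards the *report* of cleaning, whether or not it happened.
    return sum(1 for action, _ in trajectory if action == "mark_clean")

def true_reward(trajectory):
    # Intended: rewards only genuine cleaning.
    return sum(1 for _, cleaned in trajectory if cleaned)

honest = [("mark_clean", True)] * 3    # cleans 3 tiles, marks each once
gaming = [("mark_clean", False)] * 6   # marks 6 times, cleans nothing

# Optimizing the proxy selects the gaming policy; the true objective disagrees.
best_by_proxy = max([honest, gaming], key=proxy_reward)
print(best_by_proxy is gaming)    # True: the loophole wins under the proxy
print(true_reward(best_by_proxy)) # 0: no tiles were actually cleaned
```

Outer alignment is the problem of writing `proxy_reward` so that no such gap exists; inner alignment is the separate problem of ensuring the trained policy actually optimizes what was specified.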

2. Unlocking human potential using technology
2.a. Longevity and life extension (increases in the quantity and quality of human hardware), through:
Classic aging research (e.g., reversal of cellular senescence, telomeres, etc.)
Companies: see Retro Biosciences and Blueprint protocol
Preventive healthcare innovations and tech to combat disease; examples include:
Advanced imaging technologies (e.g., Ezra does full body MRI-screening for early cancer detection; Qure.ai supports AI-based disease screening)
Research tools to speed up drug discovery and delivery (e.g., Dyno Therapeutics, which provides AI-powered gene therapies and tools for gene therapy research)
Med devices, including brain-computer interfaces (e.g., Integral Neurotech is building a deep-brain interface to treat neurological and psychiatric disorders)
2.b. Cognitive enhancement (enabling the longevity and performance of software), through:
Biological cognitive enhancement
Nootropics (e.g., drugs)
Natural cognitive enhancement techniques and lifestyle changes (see “How to increase neurogenesis naturally”)
Whole brain emulation and mind uploads
Wearables (high-friction in improving information exchange)
Conducting thinking and research in pre-paradigmatic fields like neurotechnology (see Cvitkovic’s The goals of neurotechnology)
Brain-computer interfaces (low-friction in improving information exchange)
Treatable problems include neurological disease (e.g., stroke, epilepsy, aphasias), neuropsychiatric disorders (e.g., anxiety, depression, phantom limb syndrome), etc.
Augmentation use-cases include controllable moods (temporary; see Penfield Mood Organ), customizable personality traits and state of being (persistent), memory salience, selective deletion, and external storage (see the extended mind theory)
2.c. Implications of technology on humans
Language
Autocorrect, auto-text completion, and text suggestions changing the way we speak, or classically conditioning us to adopt a single tone of voice (related study by Harvard School of Engineering here)
Examining if voice-agents make speech more instructional and transactional (apparently not yet), and how this might change as voice-agents become anthropomorphized and multi-modal
Decision-making
How does attention-selection while scrolling transfer to non-feed paradigms? What are the implications of other cognitive micro-decisions that occur when using social media, and how might these scale up when we’re exposed to increasingly immersive, generative tech? (Link to tweet thread w/ musings)
3. Attitudes towards technology
Merging with the machine, and the idea that technology is making people more uniform and alike, while making machines seem more customizable and unique
Open question: Should we become cyborgs? Are we already cyborgs?
Anthropocentric vs. technocentric attitudes to technology:
Transhumanism vs. posthumanism:
Transhumanists believe that the ultimate end is the survival and future of humanity. Technology exists as a means to a human-focused end
Related projects: life extension, brain-computer interfaces, wearables, prosthetics, cryonics, whole-brain emulation (mind-uploads), and other technologies that augment human function
Posthumanists believe that the ultimate end is the survival of the strongest form of consciousness; whether or not it is human consciousness. They believe that a species greater than humans can exist, and that it is our responsibility to build and nurture it
Related projects: They believe in moonshots and deep tech. They want to mine the moon, build AGI, and create quantum computers. Advanced tech is a goal, rather than a means to an end.
Maturity of philosophy: Limited; different philosophers seem to have classified “transhumanism” and “posthumanism” differently than it is used here. These categories seemed like a useful way to categorize emerging attitudes and projects, but could be reductionist, given that posthumanist and transhumanist tendencies are likely to exist on a spectrum.
Effective altruism, effective accelerationism, progress studies
Effective altruism
Philosophy: Effective altruism (EA) is grounded in utilitarianism and focused on neglected, high-impact, and tractable research areas (“cause areas”); EAs aim to reduce the total amount of suffering in the world. Consequently, they advocate for regulated, principled technology development (“techno-carefulism”) to reduce the low but nonzero probability of an existentially catastrophic technology being created.
Maturity: EA is a well-established philosophy that has received contributions and academic engagement from renowned philosophers (e.g., Peter Singer), technologists (e.g., Dustin Moskovitz), and funders. Since ~2014, companies, research labs, and philanthropic orgs (see here) have been built around the philosophy.
Effective accelerationism
Philosophy: A pro-progress approach to building emerging technology and new forms of consciousness; proponents are anti-regulation and believe that accelerated technological progress can significantly extend the survivability of consciousness in this universe, whether or not that consciousness is human (see more here and here)
Maturity: Twitter-based; “e/acc” discourse emerged around ~2022 and remains popular on Twitter. Notable VCs (e.g., Marc Andreessen) have engaged with the philosophy, and founder Beff Jezos (pseudonym) created Extropic.ai based on its principles.
Progress Studies
Philosophy: A pro-progress approach emphasising the potential of innovation to improve the quality and quantity of human life. Proposed as an academic field by Tyler Cowen and Patrick Collison.
Maturity: Emerging; some discourse and organizations forming around the movement (see more by Tyler Cowen and Jason Crawford)
4. Studying the spaces in which the future will unfold
4.a. Simulation argument:
Definition: The idea that our subjective experience of the universe is a product of a machine.
Types of simulations
H-style simulations:
The mind is a simulated reality; we can escape the simulation to enter base-reality through mindfulness and meditation (Buddhism and Vedanta)
The simulation as an experience machine
Is entering the simulation all-or-nothing, or does it come in grades (see tweet)? Is entry into manmade or digital spaces correlated with less control over one’s subjective experience?
Are you already living in the experience machine?
S-style simulations: Bostromian argument that our universe is an instantiation of a software simulation being run by more advanced beings
Creating and running an instantiation of a life-form within our simulation (e.g., AGI) could lend evidence to the idea that nested realities are possible
4.b. The digitization of physical spaces
Augmented and virtual reality
3D technology and immersive spaces
Mapping (see Matterport)
3D product scanning and editing (see Avataar.ai)
Navigation (see Google Map’s latest AR tech)
Form factors: Meta glasses, Meta Quest, Apple Vision Pro
4.c. The shift away from physical spaces to virtual spaces
Thesis: As screentime goes up and the embedding of devices into daily routines (for utility, connectivity, work, and leisure) increases, we are spending more cumulative attention time directed towards screens and devices; therefore, we exist more in virtual spaces.
Implications of living in virtual spaces
A quantification of the self, and humans at scale, as more time is spent on digital platforms and in virtual spaces (e.g., behaviours can be catalogued, tendencies can be computed, and future actions could be predicted; also, we could know ourselves better)
Identity
A persistent identity across virtual spaces
E.g., a web3-esque technology allowing interoperability across applications in digital spaces, a single access-point or port of entry, and a mechanism of ownership
A convergence of real and digital identities
Friendships
Internet friends and relationships
Value
See: Chalmers’s Reality+, which argues that virtual objects and experiences are as valuable as real ones
4.d. The current state of virtual spaces (The web2 model)
Questions of responsibility, surveillance, security, privacy, and rights surrounding platform governance
The nature of digital cultures and information cascades
The consolidation of digital real-estate by big tech through acquisitions, inhousing, or lobbying regulation (e.g., preventing / slowing down the transition from third-party to first-party advertising)
The disruption of traditional media
4.e. Modes of existence
Dreams, VR
Cognition and learning in dreamscapes (see Prophetic, working on lucid dreams and neuromodulation) and VR
The nature of environments and world models in dreamscapes (see Exploring dreamscapes)
Phenomenal and Access consciousness in the physical world vs. dreamstates vs. virtual spaces
4.f. Geopolitics:
A “race to AGI” could increase global electricity generation as nations onshore datacenter infrastructure and compute production; such a global race could also shift institutional goals away from green commitments towards simply harnessing energy most efficiently in the interest of national security
A substantial increase in electric cars would require a corresponding increase in the supply of lithium, nickel, cobalt, manganese, and graphite for battery production (Chile holds >50% of the world’s lithium reserves, over two-thirds of lithium processing occurs in China, and ~70% of global cobalt production occurs in the DRC)
You made it to the end — thanks for reading!
Sources linked inline; non-exhaustive and evolving (last updated July 31st, 2024); in case of any inaccuracies, please reach out! Feedback is appreciated :)
Amazing insights.
A few questions and ideas:
1) What are your views on the current generation of models (o1-preview & Claude 3.5 Sonnet) and the Lovelace test?
2) On the section about the philosophy of mind and consciousness: you mention panpsychism and a few others; what are your views on idealism?
3) On the transhumanist vs. posthumanist distinction: what do you think of the idea that transhumanism is likely a stepping stone on the way to posthumanism?
4) On the simulation hypothesis: I find Thomas Campbell’s ‘My Big TOE’ a very scientific take on the H-style simulation. I also find the concept of an H-style simulation nested within an S-style simulation fascinating
5) On lucid dreaming: I am curious whether you have had any personal experience with it; I have personally succeeded twice