I discovered the ‘effective accelerationism’ movement recently on Twitter — while it seems to be part meme, part radical response to EA and longtermist thinking, it presents a thought-provoking argument: that the ultimate end of humanity should be the preservation of consciousness in the universe, and that this can be achieved through a technological singularity.
The founders describe the movement as a set of practices that:
Seek to maximise the probability of a technological singularity, and
Create optimal conditions so that alternate forms of consciousness — emergent consciousness — can flourish
Proponents self-identify as ‘e/acc’, and believe that humans should be “good stewards of a consciousness-friendly technocapital singularity.” Below is a breakdown of this statement into a set of assumptions that summarise my understanding of the e/acc argument:
innovation, technology, and capitalism will catalyse the next stage of evolution (the ‘technocapital singularity’)
this next stage of evolution will be an evolution of consciousness
as emergent consciousness flourishes in silicon-based entities, sentience will become more durable and diverse
humanity’s goal is to preserve the light of consciousness, so extending consciousness is good
Overall, I think that a pro-growth, pro-progress approach to thinking about the future of technology and consciousness is exciting: it sparks progress, inspires building, and reclaims humanity’s agency in determining the trajectory of the universe’s future. Importantly, it is substrate-agnostic, acknowledging that consciousness can flourish outside of carbon-based entities: abandoning human chauvinism seems important when thinking about survivability in the long-term future.
But, as acknowledged in the original blog post, there is work to be done in refining the movement’s central tenets and assuaging concerns about them. Personally, I would weight heavily the importance of the preservation of humanity alongside the preservation of consciousness, and have outlined below some initial thoughts, unbundlings, expoundings — and themes that could be useful to integrate:
Building with intentionality. Progress is inevitable, but effortful accelerationism is not. We can build intentionally and sustainably: this time, we don’t have to move fast and break things.
Cautious optimism. Fearmongering is bad and progress is good, but mapping out potential risks is necessary. I would adopt a cautiously optimistic outlook, rather than one of unbridled optimism, when thinking about the singularity and how it could affect the future of consciousness. In practice, this means funding AI safety research alongside AI development, and incentivising goodness alongside greatness.
Developing Plan A. I think that generating an ‘ideal’ vision for the future is especially important — assuming no uncontrollables, what does a best-case scenario look like? Tech progress towards the singularity may be inevitable, but what if it weren’t? Is consciousness diversification more important than the preservation of humanity? Should we aim to cultivate emergence alongside preservation? If technological developments enabled us to count on the longevity of humanity with a certain degree of confidence, would it still be worth diversifying consciousness?
Pausing to refine a vision may not restrict progress, but enhance it. Institutional incentives to encourage building for socially-aligned objectives could actually accelerate the movement towards the future — not just any future, but the version closest to the ideal that we can collectively envision. We do have agency over the trajectory that humanity will traverse — which is precisely why we should pause to think about the direction we want to go.