The paradox of life in the Sea Org (an organization with a billion-year contract) is how much they care about the here and now. They focus on the shortest possible timelines, to keep their low-level staff working around the clock. The sci-fi stops as soon as the Sea Org oath has been taken.
In fact, they want to get rid of the very concept of evolution and enter a timeless state of production, where the money keeps flowing uplines. Every day will be the same forever, like in the Tom Cruise film "Edge of Tomorrow".
For Sea Org staff members progress is an illusion, at least at a personal level.
My own life has also been rather static, but now it seems there is a small chance that the real world may start changing in ways that would even affect life in Big Blue and Saint Hill . . .
I have always thought that software is fantastically weak and infuriatingly defective, in fact criminally so. Software has been made as evil as its makers can get away with. The way things don't work is just part of how the world is endlessly malevolent in all directions.
That has made me more interested in ongoing AI research trends. Perhaps smarter computers could design better interfaces than human programmers are willing to do?
Well, that depends on whether these computers will be less or more evil than the human programmers who are tormenting us today . . .
The basic truth about any complex system is that it's always more complex than it seems. All projects take longer than planned (and not just when David Miscavige is involved). However, if you repeat a process enough, it can eventually be done faster and more reliably.
Multicellular life took eons to evolve. Animals took millions of centuries to develop intelligence. Primitive humans were stuck in the stone age for thousands of centuries.
Right now, society is about as dumb and inefficient as it can get away with. The most powerful force in the world is the implacable consensus that exists everywhere. There are too few geniuses to overcome the sometimes monstrously deliberate inefficiencies of life.
For those reasons, it seems probable that developing something as complex as Artificial Superintelligence will take several decades at least, and can only be done with a great deal of effort.
By this I mean that completely unexpected delays will keep arising to slow things down. Yet superintelligence is the only thing that might possibly save us, the closest thing we have to a magic genie.
The posts on the LessWrong.com website make a powerful case that when the first AI does develop superintelligence, it will likely not be "well rounded" but hyper-focused on some inadequately defined goal. Having less general intelligence will not make it less dangerous; the threat range may be "smaller" but no less deadly.
What is the simplest way a brute-force AI could run amok? All it would take is one super-clever idea: the easiest self-replicating nanobot, DNA-rewriting meta-viruses, or even social memes that manipulate personalities. We vastly underestimate how badly things could go wrong. Just dropping a test tube with bat guano can crash the world economy for three years, and cause me personally to lose $100,000 in life savings.
Open-ended software entities running on sufficiently powerful hardware are likely to be controlled by nations or large corporations. Due to their extreme cost and thanks to popular fears, it may be possible to impose worldwide restrictions on such projects. For example, they could only be allowed to run on a shared global network, with many "kill switches".
The real danger comes from smaller AI projects using cobbled-together supercomputers or rented CPU farms, and these will also arrive sooner. No one is monitoring the research in places like North Korea, or even the Flag Land Base. (My opinion is that the world is full of evil people, but in self-righteous ways. The world is evil in ways that most people refuse to talk or think about.)
Any efforts to anticipate how these projects might go wrong would generate new dangerous ideas themselves. There are a million ways the biosphere could be poisoned or society disrupted (even THIS extremely obscure blog post could be dangerous, though the expected costs from increased human obliteration risks could hardly be more than a few cents).
For that reason, smaller AI projects should also have mandatory oversight (without excessive costs being imposed), or else they shouldn't be allowed to benefit from any discoveries they make. Copyright and patents only work if most countries enforce them, so only a few countries would need to pass pro-alignment legislation to reduce the profit motive behind unmonitored research.
For AI to be controlled, the whole world would have to be open to full inspection for global safety risks, including areas that seemingly have nothing to do with AI. (I wrote an incredibly obscure novel about such inspectors. Also, I've been told the female characters especially are written in a very unrealistic way, so it may not be too readable.) Global inspection would only be practical if violations of needlessly intrusive laws (like those against non-violent crimes) were not prosecuted as a result of the inspection process.
Again, the principle of mediocrity applies. There is a likely limit to how much damage early AI projects can do, unless we get very unlucky.
Perhaps we will be protected from an all-encompassing Singularity takeover by several pre-Singularity crises that help us prepare better. Of course millions of people would have to die first. I tend to think that is how it will go.
I also want to repeat my unpopular proposal: don't rely on developing super-AI tools to solve the problem of human mortality for us, but focus on that problem directly. (That includes minor philosophical questions like what the highest ethical principles across all reality should be.)
Anyway, the point of this post is very simple: We don't have to worry about the threat of artificially intelligent entities destroying humanity, not the least bit.
Long before then, a vast array of semi-intelligent software will be able to obliterate the world just as thoroughly.
Should we manage to overcome that threat, the things we learn then will prepare us for a full AI supermind far better than anything we can imagine now.
IF this all goes right, then something like the Singularity might actually happen. It could lead to as many different outcomes as there are individuals.
Sea Org members might then find themselves locked in their current position, doing the same job twelve or fourteen hours a day, six days a week. Not just for a billion years, but effectively forever.
Monday, March 13, 2023