
Mitchell_Porter


Along with p(doom), perhaps we should talk about p(takeover) - the probability that the creation of AI leads to the end of human control over human affairs. I am not sure about doom, but I strongly expect superhuman AI to have the final say in everything. 

(I am uncertain of the prospects for any human to keep up via "cyborgism", a path which could escape the dichotomy of humans in control vs humans not in control.) 

I'd look for funds or VCs that are involved with Israel's tech sector at a strategic level. And who knows, maybe Aschenbrenner's new org is involved. 

I think there's no need to think of "training/learning" algorithms as absolutely distinct from "principled" algorithms. It's just that the understanding of why deep learning works is a little weak, so we don't know how to view it in a principled way. 

memorization and pattern matching rather than reasoning and problem-solving abilities

In my opinion, this does not correspond to a principled distinction at the level of computation. 

For intelligences that employ consciousness in order to do some of these things, there may be a difference in terms of mechanism. Reasoning and pattern matching sound like they correspond to different kinds of conscious activity. 

But if we're just talking about computation... a syllogism can be implemented via pattern matching, a pattern can be completed by a logical process (possibly probabilistic). 

They all seem like reasonable estimates to me. What do you think those likelihoods should be? 

Just skimmed the pdf. This is my first exposure to Aschenbrenner beyond "fired by OpenAI". I haven't listened to his interview with Dwarkesh yet. 

For some reason, the pdf reminds me a lot of Drexler's Engines of Creation. That was a book arguing that nanotechnology would transform everything, that it posed great perils, and sharing a few ideas on how to counter those perils. Along the way it mentions that nanotechnology will lead to a great concentration of power, dubbed "the leading force", and says that the "cooperating democracies" of the world are the leading force for now, and can stay that way. 

Aschenbrenner's opus is like an accelerated version of this that focuses on AI. For Drexler, nanotechnology was still decades away. For Aschenbrenner, superintelligence is coming later this decade, and the 2030s will see a speedrun through the possibilities of science and technology, culminating in a year of chaos in which the political character of the world will be decided (since superintelligent AI will be harnessed by some political system or other). Aschenbrenner's take is that liberal democracy needs to prevail; it can do so if the US maintains its existing lead in AI, but to do that, the US has to treat frontier algorithms as the top national security issue and nationalize AI in some form. 

At first read, Aschenbrenner's reasoning seems logical to me in many areas. For example, I think AI nationalization is the logical thing for the US to do, given the context he describes; though I wonder if the US has enough institutional coherence to do something so forceful. (Perhaps it is more consistent with Trump's autocratic style, than with Biden's spokesperson-for-the-system demeanour.) Though the Harris brothers recently assured Joe Rogan that, as smart as Silicon Valley's best are, there are people like that scattered throughout the US government too; the hypercompetent people that @trevor has talked about. 

When Aschenbrenner said that by the end of the 2020s, there will be massive growth in electricity production (for the sake of training AIs), that made me a bit skeptical. I believe superintelligence can probably design and mass-produce transformative material technologies quickly, but I'm not sure I believe in the human economy's ability to do so. However, I haven't checked the numbers, this is just a feeling (a "vibe"?). 

I become more skeptical when Aschenbrenner says there will be millions of superintelligent agents in the world - and the political future will still be at stake. I think that once you reach that situation, humanity exists at their mercy, not vice versa... Aschenbrenner also says he's optimistic about the solvability of superalignment, which I guess makes Anthropic important, since they're now the only leading AI company that's working on it. 

As a person, Aschenbrenner seems quite impressive (what is he, 25?). Apparently there is, or was, a post on Threads beginning like this: 

I feel slightly bad for AI's latest main character, Leopold Aschenbrenner. He seems like a bright young man, which is awesome! But there are some things you can only learn with age. There are no shortcuts

I can't find the full text or original post (but I am not on Threads). It's probably just someone being a generic killjoy - "things don't turn out how you expect, kid" - but I would be interested to see the full comment, just in case it contains something important. 

Biden and Trump could hold a joint press conference to announce that they are retiring from politics until human rejuvenation is technically possible

Option 1 doesn't seem to be an explanation. It tells you more about what exists ("all universes that can be defined mathematically exist") but it doesn't say why they exist. 

Option 2 is also problematic, because how can you have a "fluctuation" without something already existing, which does the fluctuating? 

Please explain. Do you think we're on a path towards a woke AI dictatorship, or what? 
