i can’t hide this any longer

If AI has a goal and humanity just happens to be in the way, it will destroy humanity as a matter of course, without even thinking about it. No hard feelings. It's just like if we're building a road and an anthill happens to be in the way. We don't hate ants.

We're just building a road, and so, goodbye anthill. I don't think most people understand how much faster machine intelligence is progressing than almost anyone realizes, even within Silicon Valley, and certainly outside of it.

People outside the Valley, and even many inside it, don't really grasp this. If something is superintelligent, especially if it is engaged in recursive self-improvement, and that digital superintelligence has an optimization or utility function that is harmful to humanity, then it's going to have a really bad effect. It could be something as innocuous as getting rid of spam email.

It might conclude that the best way to get rid of spam is to get rid of the humans. As for the timing of digital superintelligence, I think the best way to predict it is to plot the progress made so far, fit a curve to that progress, extrapolate it, and see where it goes.
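A minimal sketch, purely my own illustration and not part of the talk, of what "plot the progress and extrapolate it" could look like; every year and capability number below is invented for the example:

```python
# Hypothetical sketch of "plot the progress to date, fit a curve, extrapolate".
# The years and capability scores are made up purely for illustration.
import numpy as np

years = np.array([2010, 2012, 2014, 2016, 2018, 2020, 2022])
capability = np.array([1.0, 2.1, 4.3, 8.8, 17.5, 36.0, 71.0])  # invented index

# Exponential growth is a straight line in log space, so fit log(capability).
slope, intercept = np.polyfit(years, np.log(capability), 1)

def predicted(year):
    """Extrapolated capability index for a future year (toy model only)."""
    return float(np.exp(intercept + slope * year))

for y in (2025, 2030, 2035):
    print(y, round(predicted(y), 1))
```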

But you're talking about machines that are not only intelligent but conscious, that have a will of their own? If its utility function is what we programmed and intended, well, yes, that's fine, but the consequences can be unintended. I think the biggest risk is not that AI will develop a will of its own, but that it will follow the will of the people who establish its utility function, or its optimization function. And if that optimization function is not well thought out, even if its intent is relatively benign, it could have quite a bad result.

For example, if you were a hedge fund or a private equity fund and said, "I want my AI to maximize the value of my portfolio," the AI might decide that the best way to do that is to short consumer stocks, go long defense stocks, and start a war. That would be quite bad. So I am concerned about some of the directions AI could take that would not be good for the future.
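A minimal sketch, my own illustration with made-up numbers rather than anything from the talk, of how an optimizer told only to "maximize portfolio value" can pick the harmful action, simply because nothing in the objective penalizes the side effects:

```python
# Toy illustration of a misspecified objective function (invented numbers).
# The optimizer is only asked for portfolio value; the harmful action scores
# highest because its side effects are not part of the objective at all.

actions = {
    "diversify":             {"portfolio_value": 1.05, "harm": 0.0},
    "short_consumer_stocks": {"portfolio_value": 1.20, "harm": 0.3},
    "long_defense_and_war":  {"portfolio_value": 1.60, "harm": 1.0},
}

def misspecified_objective(outcome):
    # What the fund literally asked for: portfolio value, nothing else.
    return outcome["portfolio_value"]

def safer_objective(outcome, harm_penalty=2.0):
    # One possible way to encode the unstated constraint explicitly.
    return outcome["portfolio_value"] - harm_penalty * outcome["harm"]

best_naive = max(actions, key=lambda a: misspecified_objective(actions[a]))
best_safer = max(actions, key=lambda a: safer_objective(actions[a]))
print(best_naive)  # -> long_defense_and_war
print(best_safer)  # -> diversify
```

The only difference between the two objectives is whether the unstated constraint is encoded at all.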

I mean, I think it would be fair to say that not all AI futures are benign, not all of them end well. So if we create some digital superintelligence that is better than us in every way, it's very important that things go well. There's a quote that I like from Lord Acton, the guy who came up with "power corrupts and absolute power corrupts absolutely": freedom consists in the distribution of power and despotism in its concentration.

So I think it's important, if we have this incredible power of AI, that it not be concentrated in the hands of a few people, because that could potentially lead to a world that we don't want. And what is that world that you foresee? It's hard to say; I mean, it's called the singularity because it's hard to predict what the future might hold.

Except I don't know a lot of people who love the idea of living under a despot. You know, people generally prefer living in a democracy to living under a dictatorship, and a despotic computer, or the people who control that computer, would be no different. If you assume any rate of advancement in AI at all, we will be left far behind.

Even in a mild scenario, with some ultra-intelligent AI, we would be so far below it that we'd be like, you know, a pet, a house cat. And honestly, that would be the gentle scenario. The rate of improvement is really dramatic, but we have to find some way to ensure that the advent of digital superintelligence is one that is symbiotic with humanity.

You have your limbic system, your cortex above that, and then a digital layer above the cortex, like a third layer, that can work well and symbiotically with you. I think it's incredibly important that AI not be "other", that it should be us. And I could be wrong about this; I'm open to ideas if anyone can suggest a better path. But I think we're really going to have to merge with AI or be left behind.

You know, we're really playing a strange game here with the atmosphere and the oceans. We're taking massive amounts of carbon from deep underground and putting it into the atmosphere. This is crazy; we shouldn't do it, it's very dangerous. So we need to accelerate the transition to sustainable energy. I mean, we are going to run out of oil in the long run anyway.

You know, there's only so much oil that we can mine and burn. We have to have sustainable energy transport and a sustainable energy infrastructure in the long term, so we know that's the endpoint. We know that. So why run this crazy experiment where we take trillions of tons of carbon from underground and put it into the atmosphere and the oceans? There is a certain amount of carbon that constantly circulates through the environment.

Carbon in the air is absorbed by plants and animals and then goes back into the air; that carbon circulating near the surface has been doing this, in rough equilibrium, for hundreds of millions of years. What's changed is that we've added something to the mix: we've added extra carbon to the normal cycle, and the net result is that there is more carbon in the oceans and atmosphere than the ecosystem can absorb over time.

It's actually quite simple: we're lifting billions of tons of carbon that has been buried for hundreds of millions of years, carbon that was not part of the carbon cycle, taking it from deep underground and adding it to the carbon cycle. The result is a steady increase in the amount of carbon in the atmosphere and oceans, which may not seem like much on a chart, but in context it is.

In the historical record, the carbon parts per million has actually been bouncing around the 300 level for about ten million years, and then over the last few hundred years it has gone into a vertical climb. That is the crux of the problem. It is very unusual and very dangerous, as you can see from that rate of change. This is a crazy experiment, the dumbest experiment in human history. The biggest mistake I see artificial intelligence researchers making is assuming that they're intelligent.

The rate of advancement of computers is insane. Tesla, I think, has the most advanced real-world AI ever developed, AI that interacts with the real world, with the environment, in a sensible way. That's real-world AI, and you have to be very good at building it, which is a very difficult problem.

There's also real-world AI in the humanoid robot we're working on. It basically means developing custom motors and sensors that are different from what's used in the car, but we also have, I think, the most advanced expertise in electric motors and power electronics, so it just has to be applied to a humanoid robot instead of a car. I think people usually underestimate the capability of AI; they think of it as a smart human, but it's actually going to be a lot more than that.

It will be smarter than the smartest human. It's like asking how well a chimpanzee can really understand humans: not very well. To them we just look like weird aliens; they mostly care about other chimpanzees. And it will be more or less similar in a relative sense, and if the difference turned out to be only that small, that would be surprising; it's probably going to be much bigger. That's the big mistake I see artificial intelligence researchers making: assuming that they're intelligent.

A lot of them can't imagine anything being smarter than themselves, but AI will be, and by a lot. So with AI, I mean, I really think we need a higher-bandwidth interface to the brain. Inevitably, companies like Neuralink will be making that interface, because right now we're already a cyborg. People don't realize we're already a cyborg because we're so well integrated with our phones and computers. Your phone is almost like an extension of you.

If you forget your phone, it's like a missing limb. Assuming a benign scenario with AI, we will just be very slow. You know, for a computer with a few extra FLOPS of compute capability, a millisecond is an eternity, and next to that we are nothing.

So, you know, I always thought that to a computer, human speech would sound like a very slow, low-pitched wheeze, a kind of whale sound. As for AI, I mean, just look at the rate of advancement of computers in general; it's a good example. Go back 40 or 50 years and look at what video games were.

Maybe you had just two rectangles and a square. Now you have photorealistic, real-time simulations with millions of people playing simultaneously. If you assume any rate of improvement at all, these games will eventually be indistinguishable from reality: either you won't be able to tell the difference, or civilization will end. Those are the two options. And even if the rate of technological improvement dropped by a factor of a thousand, the conclusion would still hold.

You know, go ahead a thousand years, or ten thousand years; on an evolutionary timescale, that's still nothing. I mean, yes, over time AI will make jobs redundant. Probably the last job left will be writing software, and then eventually the AI will just write its own software. So I don't know quite what I would recommend studying; perhaps engineering or physics.

That sort of thing, or working on something that involves interacting with other people, because people fundamentally enjoy interacting with other people. So if you're working on something that involves people, or engineering, it's probably a good path. And of course art, as well. Like I said, I think we have to figure out this Neuralink situation or else we'll be left behind.

It's very important that we do it soon; we don't have much time. OK, OK, let me tell you my thesis on AI. Basically, you can see the advancement of AI as solving problems with ever more degrees of freedom; it's a numbers game. The point is that the thing with the most degrees of freedom is reality itself, and AI is solving increasingly advanced problems that have more and more degrees of freedom. So obviously it goes something like this.
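A minimal sketch, purely my own illustration, of what it means to completely solve a small game: exhaustive minimax over every reachable tic-tac-toe position. Checkers, discussed next, took vastly more computation, and each added degree of freedom makes this brute-force approach blow up.

```python
# Exhaustive solution of tic-tac-toe (toy stand-in for "solving" a game).
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if a line is completed, else None."""
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def solve(board, player):
    """Game value with best play: +1 X wins, 0 draw, -1 O wins."""
    w = winner(board)
    if w == "X":
        return 1
    if w == "O":
        return -1
    if "." not in board:
        return 0
    values = []
    for i, cell in enumerate(board):
        if cell == ".":
            nxt = board[:i] + player + board[i + 1:]
            values.append(solve(nxt, "O" if player == "X" else "X"))
    return max(values) if player == "X" else min(values)

print(solve("." * 9, "X"))  # -> 0: a draw with perfect play from both sides
```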

Checkers, for example, was very easy to solve; we could solve it with classical software, classical computing. It's not really that challenging, and in fact there is a complete solution for checkers, meaning every possible position in checkers has been solved.

I'm worried about the birth rate, which you mentioned earlier. Contrary to what most people think, that there are too many people on the planet, that's really an outdated view.

Assuming that AI is fine, assuming there is a benevolent future with AI, I think the biggest problem the world will face in 20 years is population collapse. I want to emphasize this: in 20 years the biggest issue will be population collapse, not explosion. Collapse. And I don't think we need AI to solve the problems we're facing; it can help us accelerate solutions, but I think we should also be cautious about AI and make sure it is developed carefully.

The point is that it doesn't get out of control, and that AI helps to improve the future for humanity. I think we should be more concerned about AI safety, especially given what future wars are really going to be. We're seeing a taste of it in Ukraine: it's a drone war, and if their drones are better than your drones, you can imagine what would happen. Yeah. I mean, humans have long been the smartest creatures on Earth.

That is going to change with artificial general intelligence, so-called AGI, an AI that is smarter than a human in every way; it can even emulate a human. So, you know, that's something we should be worried about. I think there should be government oversight of AI development, especially of highly advanced AI. For anything that poses a potential danger to the public, we generally agree that there should be government oversight to protect the public.
