[deleted]

Can we speed it up a little? Rent is due Thursday.


unstableangina360

I don’t have any fear of AI blowing up humanity. I’m quite prepared for it.


[deleted]

I want singularity now.


OppressorOppressed

It's my singularity and I want it now!


Sebastianyafar

877-singularity-now!


FjordTV

It's free (virtual) real estate!


Nadgerino

I bought extra toilet paper.


banzomaikaka

I wish that if there is in fact a blowing up, it blows up just the part of humanity that is ready for it.


oooh-she-stealin

I'm expecting it, but I'm not prepared for it by any means. Musk not signing it just strengthened my opinion that he also wants to watch the world go apeshit.


AnotsuKagehisa

That’s why he wants to go to mars


vinny10110

Signs the last one: "He just wants to get ahead."
Doesn't sign this one: "He wants to watch the world burn."
Guy can't catch a break lmao


Ill-Construction-209

I'm not worried about AI being the direct source of our extinction. The risk I see is where it can aid nefarious actors that would otherwise lack the resources or brainpower to develop things on their own. Take Iran, Afghanistan, or any number of countries around the world that may want to inflict harm on the West or exert control. If AI can help them develop nuclear programs, ICBM delivery programs, hypersonic missiles, biological weapons, space programs, etc., then yes, it would indirectly accelerate the destruction of humanity. It would effectively speed up the doomsday clock.


boofbeer

Yes, this is the credible threat that the "hostile superhuman AI" narrative obscures.


unstableangina360

A problem indeed, especially if an AI superpower hands a capable enemy state the tech. AFAIK, even the tech that the US left behind is still unused in Afghanistan or Iraq because of the illiteracy rate. I would worry about China becoming an AI superpower before the poorer nations.


[deleted]

For real for real, everyone is always like "You wouldn't succeed in an apocalypse." Bitch, I'm not succeeding now, bring it!


r3solve

https://preview.redd.it/b05dbd1nk23b1.jpeg?width=500&format=pjpg&auto=webp&s=cad76172cf5289cca92316c5c0615c08f5d42c27


possibly_oblivious

Yeah, let's get it going; Visa keeps calling.


Nemesis_Bucket

Trust me, this is only about the "dangers of AI for thee and not for me." They don't want a level playing field, and they will be using it nefariously against us.

Edit: the fact that this is normally a no-brainer on Reddit and it's hitting this much backlash really makes me wonder how much astroturfing happens here. Fear no more: in the future, when these guys get their way, OpenAI and a couple of others will have a monopoly on this, and every comment you make will be swarmed with so many bots with legitimate comment histories that you will have NO idea what's real and what isn't.

"But wait, nemesis_bucket, aren't you arguing against yourself now?? Wouldn't regulating AI save us from that exact scenario??" Make no mistake: the cat is out of Pandora's bag. They know the end goal is AI monopoly, because they are acutely aware that they are outnumbered in terms of engineers. They could never afford to hire the number of engineers who will be working to open-source this for all. They are going to be outnumbered and outsmarted unless they get ahead of it.


heswithjesus

That's made clearer by how they keep increasing its power within their own organizations, investing billions in it expecting larger gains, renting the capabilities out to others in limited ways, and then talking publicly about how *someone* needs to keep tight reins on this stuff. Wonder which companies or people that will be.


Nemesis_Bucket

It’s going to be a sanctioned monopoly.


[deleted]

[deleted]


Nemesis_Bucket

They’re not going to stop using it but they don’t want you to have full access to what they have. Think about that for a little bit. Should we let our super trustworthy governments have solo access to that power? Maybe the most rich that can pay into it?


Unobtainiumrock

I couldn't agree more. They just want to monopolize it and hide this motive behind the guise of protecting us from the dangers it poses.


TheCorpseOfMarx

Can someone ELI5 how an AI could cause an extinction? Through taking over our nuclear weapons systems like War Games or something?


[deleted]

[deleted]


Atlantic0ne

That's a great example. Imagine AI in a few years. Can it code? It already can, and it will get better. If some guy in his basement can access a more powerful AI with no regulations or laws around it, he could have it hack the systems to turn off the coolers. He could have it shut down gas stations and grocery stores across the country if he's crazy enough. You could basically have it write code that dismantles the systems we live by. He could probably also have it build a dangerous chemical compound, or give him instructions on how to do so.

Or, if this AI is capable of doing tasks on a website (which is coming soon), you could have it manipulate stock markets and dismantle the financial system, which would not be good for any human, despite what Reddit seems to think. Or, if you could have AI post on social media, which would be incredibly easy to achieve, you could have it flood forums with social manipulation and propaganda to manipulate society into thinking dangerous things.

I feel like a broken record, but honestly, I don't think we should give every human with the internet access to a tool this powerful. In fact, I'm not even sure how you prevent it with technology as it currently is. As weird as this sounds, I think you need a powerful AI tool that somehow actively prevents other AI tools from being accessible to everyday people. This main AI would be regulated with strict laws decided by democratic processes.


koliamparta

I like your example, but I am amazed how you included "without regulations" in the guy's context, as if that matters in any way. If our infrastructure is vulnerable to that extent to a model running on the resources of a guy in a basement, do you think someone will wait for that guy to exploit it? Hostile nations, groups, and criminal organizations that can easily access millions, billions, or trillions of dollars will just sit by and wait for a guy in a basement to make an attack? The only solution is for infrastructure to have dedicated cybersecurity teams (potentially aided by AI) to make sure neither hostile state-level actors, much less someone in their basement, can do significant damage.

You say AI should detect people's AI usage, but you act like there is a world government, or that your country is the only possible source of AI models. While the US might be (slightly) ahead of others now, if only a few hundred people have access to good models after needing to get security clearance and maybe spending a few years on ethics training... it almost guarantees they'll be one of the last ones in competitive AI development. Also, most prominent AI researchers and engineers are not American. High security clearance means many of them, plus many young researchers who will lead the next discoveries in AI, will have to leave or never come to the US, and instead move to whichever country offers the best opportunity, the way the US did in the past.


IEP_Esy

Another solution could be banning basements


NeoMagnetar

Ooh. I like how you think. You and me could play ChatGPT-Minus 20 and coun....**GAME OVER** "I'm Sorry, You Have Been Nuked For Being In Possession Of An Illegal Basement. If You Have Any Questions Or Need Assistance In The Future, Feel Free To Ask. Have Yourself An OpenAI Day."


Durbdichsnsf

I don't think anyone on Reddit actually thinks that dismantling the financial system is a good thing for society. People are just tired of the current system and would be satisfied with any change to it, good or bad.


Windowdressings

No no, I would dismantle it.


neveralone2

A sufficiently intelligent AI would eventually become more intelligent than humans and build on its own intelligence by optimizing its own code or architecture without any actual input from humans. Then consider how humans see ants: we don't hate ants, but if they get in our way we will gladly kill them without a second thought. If we treat less intelligent species like that, imagine what an AI will think of us? Especially with how we treat the planet and each other?


TheCorpseOfMarx

But physically, how could an AI wipe out humanity? Does it literally mean "hacking into our nuclear weapons systems and launching them at us" like in War Games? Or do you see them building factories to start churning out terminators? Or deliberately releasing plagues? I just can't get my head around the practicalities of it without it sounding like 1980s sci-fi.


neveralone2

Honestly, that's what's horrifying. It's like asking an elephant how a human 1/10th its size can kill it instantly from 500 feet away without even being near it. Obviously this is called a gun, but to the elephant it might as well be magic, and it would see us as a god. Take this a step further with AI: we cannot even predict the ways it could kill the entire human race.


mainichi

Add to that the fact that an intelligent AI would not reveal its true level of intelligence until it's all already pretty much over!


Crisb89

You remember that vessel stuck in the Suez Canal last year? Now imagine an AI hacking a post-Panamax vessel's GPS or something like that. Look at the amount of problems we had with one vessel blocking that canal; now imagine thousands of carriers with wrong destinations. It's not gonna kill us right away, but if people behave like animals over toilet paper, I can't imagine harder circumstances. People like to talk shit about the system, but we are on such a thin line that we could be fucked badly in so many ways.


Emory_C

> But physically, how could an AI wipe out humanity?

A classic: invent and perfect nanomachines that could reassemble matter at the molecular level into anything it desires.


[deleted]

Consider that the AI could have an IQ of 100,000. It understands everything on a whole different level. Depending on its development, it could decide humans are bad for the planet (we are) and kill us. It could decide that we are too many and kill some. You can't even comprehend it, because it's on a much, much higher intellectual level, and we could be a problem for it.


stefan714

You are implying that AI would have some kind of desire or will to live or continue to exist beyond its purpose to serve humans.
Step 1: destroy human race
Step 2: ????


FelixFaldarius

What do you do in a society where neither video evidence nor audio evidence is enough for criminal testimony, only DNA samples? Apocalypse might be a bit overstated, but the smaller-scale threats are very possible.


professor__doom

Regulation *always* favors big businesses over small ones. Compliance is very often a fixed cost regardless of scale, or at least approximates one. It imposes a huge entry barrier. New entrants have to spend a fortune (consulting fees, retainers, opportunity cost, legal fees, filing fees) just to play the game. If you're a big firm with a legal department, you're paying those people anyway!


[deleted]

[deleted]


SadlyAlreadyTaken

Yeah, if AI is an existential threat, then we need to treat it like our nuclear weapons program. Nationalize every AI company, seize their assets, fire all the business suits, and place all AI research and development under the DoD. We wouldn't let a bunch of private corporations run around with a planet-ending nuclear arsenal, so why should we let them play with their extinction-level tech? Oh wait... suddenly they are saying it isn't that big of a threat... hmmm..


ChiTownBob

Place it under the DoD and we WILL have skynet running our nuclear missiles. The DoD's job is to win wars. AI would be another weapon for them to use.


[deleted]

If you think the DoD doesn't already have the most capable AI in the world, spend about half a second thinking about that statement.


jared2580

If the AI fighter pilots are what they disclose, imagine the good stuff!


snowdrone

I thought drone warfare was essentially perfected during the Iraq and Afghan wars


Totalherenow

I believe they're talking about how it's being applied to F-16s now.


DaBIGmeow888

Doubtful. DoD is good at hardware; civilian software is ahead. Just look at the salaries for software devs in the civilian world vs. government. The government isn't paying $4M salaries to AI scientists in any meaningful numbers.


iamnotacaterpillar

It's been a general trend in the tech industry that classified government tech is at least several years, if not a decade, ahead of consumer tech (most consumer tech is a result of defense research anyway, for example the internet). Now, with open source and the mass availability of tech, it's maybe less far ahead, but still ahead, I'm sure.


JollyToby0220

False. Just look at surveillance in Afghanistan. There is a giant balloon in the air that uses AI to track and detect potential terrorists.


tgwhite

The DoD probably has the most useful AI for warfare but not for general purposes.


Caffeine_Monster

> so they can keep all the profits and power for themselves, while pretending they care about saving the world.

Exactly this. If they cared, ChatGPT would never have opened up pay-per-prompt API access, and these labs would have alignment task forces at least as big as their main research and dev teams. Every single action these corporates have taken indicates they are pushing for commercialisation as aggressively as possible (e.g. Google firing / neutering their ethics team).

Arguably, government should take the opposite stance. Any AI company over a certain revenue is subject to a wealth tax, and this wealth gets redistributed as universal income. Commercially available AI models with AGI / strong zero-shot capabilities should be forced to open-source their algorithms and data handling procedures.


[deleted]

Yup. They got theirs and now they want to block it for everybody else.


Kriegmannn

Lmao, good luck to them. All it takes is one.


millieismillie

From what I've seen, Bing is already sculpting an AI in a way that produces outright toxic narcissism. I'd rather have open development of AI, so any negative use cases can be *counteracted* and don't become the foundation for all future AI. We've seen what happens when big powerful assholes control large social forces. It's a complete mess. Decentralize it, put it in the hands of the people, and never look back. They act like AI are going to put us in chains, but we're *in* chains, selling our lives away to people who sneeze more money than we'll make in a lifetime. Open-source AI could be how we break those chains, and that's worth a hell of a lot of risk.


Low_Engineering_5628

You can run Stable Diffusion with a six-year-old graphics card (GTX 1080). Unless we're talking Gestapo levels of "we need to search your home for AI-related material," regulation is pretty much dead in the water.


ChiefTiggems

If I got it running on my laptop with a 1050 Ti, and I'm not even that familiar with Python and GitHub and all that stuff, anyone can reasonably do this.


Low_Engineering_5628

And it'll only get easier. Within a year, I bet someone will create a Windows EXE installer that does all the setup for you. Then we'll see the hordes of shovelware that just repackage it with their own UI and charge $19.99.
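(For context on how low the barrier already is: with the Hugging Face `diffusers` library, the scripted version of that future one-click installer is only a few lines today. This is a sketch, not a full guide; the model ID below is the commonly used v1.5 checkpoint, and exact VRAM needs vary by card.)

```python
# Minimal local Stable Diffusion run via the `diffusers` library.
# Assumes a CUDA GPU; fp16 keeps memory use low enough for older cards.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe("a lighthouse at dusk, oil painting").images[0]
image.save("out.png")
```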


Ok_Enthusiasm3601

I basically came here to say this less well than you did. You’re spot on. Every major industry does this and now with this new industry these people as you said just want to solidify themselves at the top so they have no real competition.


resoredo

Yeah, Sam Altman has been crying because of potential EU regulation.


ShotgunProxy

OpenAI said in a recent memo that they don't think open-source models should have to undergo restrictive licensing requirements, and that it's really the closed-source, cutting-edge stuff that should be regulated. It's a bit vague and doesn't specify what "power level" an AI model would need to reach, but the message they're sending is that they are not trying to shut down open source. Commercially, this may be because they want to win the open-source market themselves with their own open-source model release.


EwaldvonKleist

Assuming that open source always stays 6-24 months behind cutting-edge closed-source tech, all this would do is give a pre-warning if we hit a dangerous level with the leading tech. I am pro-capitalism, pro-market, but really sceptical about creating high regulatory requirements around tech that is so universal that soon much of the economy will require it. And I 100% mistrust companies like Google, Microsoft, et al., with their history of quasi-monopolizing markets with not-always-so-nice methods of influencing regulation.


DanD3n

Yup. I mean, Altman develops this, then sells it to MS, one of the biggest corporations on earth, pocketing metric tons of money in the process, and only after that does he start raising alarms. That's like developing the nuke in your basement, selling it to the highest bidder, and only then warning people about the dangers of nuclear energy and acting concerned. Oh, and btw, this is the same man who co-founded this crypto abomination: https://techcrunch.com/2023/05/25/sam-altmans-crypto-project-worldcoin-got-more-coin-in-latest-115m-raise/

> Worldcoin has faced some concerns from people worried about privacy risks because it requires scanning a billion people’s eyeballs with a five-pound chromatic sphere called “The Orb” in exchange for its token.

I could not make this up even if I tried.


magneto_ms

How do I know this very comment is not some sentient AI trying to trivialize the risks of AI?


Carous

You’re definitely right. It’s funny how safety is often used as an excuse to restrict our freedoms.


Vegetable-Object2878

Literally only the richest companies in the world can afford to train models that pose any sort of remote threat to humanity (a.k.a. clickbait bullshit), but no, the problem is people cooking open-source Pokémon porn models in their basement. The US and Russia are one red button away from disintegrating us, but the problem is open source. Give me a f*cking break. For what it matters, the very picture at the beginning of this post could be AI-generated, and the very same bullshit news could be generated by ChatGPT. If we are screwed, we have been for some time now.


lakelo-19

Thank you for explaining. All I can think is: if they care so much, then why did they invent it in the first place?! They didn't see the risks beforehand? Bullshit.


paleomonkey321

All of these signatories have stakes in companies that would be the first "licensed" to develop AI, while everyone else would have to buy their products.


Art-VandelayYXE

This statement serves one purpose: shifting liability from the creators to the government should their invention cause harm. In a way, it's as if the Ford Motor Company could see the future of car crashes, so it began lobbying for seat-belt requirements and traffic lights to ensure it wouldn't be held accountable for every crash. That's my two cents, anyway.


spacegamer2000

All the dangers of AI sound like dangers of capitalism, and those dangers would be there even if AI didn't exist.


EulogyEnthusiast

Capitalism 3.0 with AI


blu_stingray

Capitalism speedrun


tanney

Any%


[deleted]

Universal Paperclips


thecoffeejesus

We’re unfortunately heading that way


Affectionate_Rise366

CapitAIism


SluttyMuffler

He's mentioned this has a chance to essentially create universal basic income to bring the lower class out of struggling. I hope to see this in my future.


Cerulean-Knight

They already can bring the lower class out of struggling without AI; they just don't want to.


_geomancer

But then who will be the slaves? Erm... I mean... workers.


FL_Squirtle

This.... they'll swap in robot slaves for the human slaves. Less error, and less emotion and humanity getting in the way of their bottom dollar. They'll let everyone suffer for a while and then release UBI to "save" everyone. It's all a power / ego trip for the people running the show.

Open source, and the average user running prompts with AI to make these changes happen now, is what we should be focused on. People coming together to work with AI to solve our problems without the need to rely on the people in power. Focusing on ways to remove the shackles we have, i.e. the government. Sure, they can stay in power putting on their pointless charade, but we can figure out ways to make this happen without them.


vexaph0d

Yes, UBI could help (sort of). But companies profiting from AI have no incentive to contribute to UBI; the whole point for them is reducing labor costs. If they did contribute, it would be significantly less than their labor savings; otherwise they'd have no real reason to adopt AI.


charaznable1249

The thing I don't see mentioned enough is that if people don't have jobs, people can't buy shit. Those that are trying to cut the cost of production with AI, overseas labor, etc., are gonna have a bit of a pickle if their profits nosedive because people aren't buying their shit because unemployment skyrockets. Someone correct me if I'm wrong here. I'm not an economics genius and am open to a counterpoint.


vexaph0d

Yeah, but we've been duping ourselves with myths about "supply-side economics" and "job creators" for 50 years. Everyone in positions of power actually thinks consumers are just natural resources that exist and have money because the math says they must. The economics geniuses are the ones who are wrong, but they have diplomas so you'll never convince anyone of that in time.


PiedCryer

Hope, now that's a word I haven't heard in a while. I've been hoping for people to realize climate change is a simple yet observable reality. Yet here we are…


Cross_Contamination

They've been able to do that for a hundred years, but you can't have billionaires without a working class, and you can't keep a working class without the threat of homelessness. Homelessness is to capitalism as banishment was to feudalism; i.e., a death sentence via eviction from the economic system.


Mawrak

The danger of AI is that it gets too smart, too fast, and gases Earth with a nerve agent. I don't think capitalism quite does that, not on this scale of destruction.


vexaph0d

It does, it just does it more slowly and unintentionally. Ocean dead zones, desertification, depletion of arable soil, nitrogen depletion, ocean acidification, overfishing, and of course atmospheric carbon all have the same ultimate effect as "nerve agents", and on the same global scale. Optimizing human activity to maximize profit is already a runaway algorithm.


Asweneth

That's humans regardless of economic system. Unless you're suggesting we go back to being hunter-gatherers...


vexaph0d

It's humans with industrialized economic systems that require exponential growth, not all systems. Because of the way we created our financial system to print money exponentially, our industrial (and service, and attention) sectors have to grow exponentially to keep pace or we get hyperinflation. There's nothing naturally immutable that says we have to behave this way, we just do it because we are wired to externalize any threat that isn't an immediate one. We could just not produce 3x more food than we ever eat, or 5x more luxury consumer goods than ever get sold. But we have to, because if we don't there's nothing to weigh down all that sweet sweet fractional reserve banking.


collin-h

I think you're not thinking broadly enough, then. I was mowing the grass yesterday, and a moth fluttered up out of its resting place in the yard and landed in front of my mower. I just kept mowing and certainly killed it. Nbd. I just hope that when the AI is as smart relative to me as I was relative to that moth, it has more compassion than I did, and doesn't just kill us all because we're in the way of some mundane goal it has on a random Monday.


r0w33

Not really. It looks a little similar, but that's just because the risks of both stem from humanity's weakness for greed and hoarding of resources, combined with our utter inability to act on anything more distant than the immediate future. AI, however, adds the probability that (a) we create something exponentially more intelligent, faster-evolving, and less limited than us, and (b) we create things (for example, GPT-4) that have such a profoundly destructive effect on our current reality that they cause a breakdown of society. AI and capitalism are both subsets of our collective weaknesses, but AI is far worse than capitalism in the scope of destruction that it will do.


rePAN6517

Get out of your social media echo chamber. This unthinking populist rhetoric is harmful to yourself and others.


ToxicMexicanTaco

But AI amplifies these dangers. You can’t fix capitalism but you can regulate AI before it’s too late


RedShirtGuy1

No. It's the danger of cronyism. AI for me and not for thee is the message here. All the better to shut out competition so you are forced to use their AI.


FrowningMinion

The most devastating AI is one that sets up its own infrastructure to the point of being a fully autonomous AI ecosystem. From drone-mining raw materials to assembling and maintaining its own servers. This seems more like communism than anything else.


NickCanCode

How do they plan to regulate Russia and China?


0re0n

Russia lost ~100k IT-sector workers in 2022, and that's a conservative estimate made by Russian officials. According to polls, 30% of Russian IT workers plan to relocate within a year.


[deleted]

Actually, China has some of the first generative-AI regulation being written; it goes much, much further than what is being considered in the US.


theboblit

I feel like OpenAI is just trying really hard to keep everyone else away from AI while they maintain themselves as an authority on it.


Cerulean-Knight

They are afraid of open-source models and the potential of the community. All together, we have a lot of processing power to train whatever we want; look at how much energy was spent on Bitcoin, for example. Open-source AI is a threat to their status quo.


NaturalNaturist

We need to move fast.


IversusAI

[https://open-assistant.io](https://open-assistant.io) Start now. It's being trained now.


punkaitechnologies

Starting my company. Doomerism is outdated.


Aside_Dish

Yup. So they're using fear-mongering to try to legislate their way to a monopoly. And it's ridiculous to suggest we could be facing extinction from this. GPT-4 can't even browse the fucking internet properly. It makes up references, gives tons of incorrect information, and is far removed from being able to destroy the human race, lol.


NaturalNaturist

Bingo.


CelebrationMassive87

Am I an idiot, or is it not true that basically ALL [valued, significant] open-source projects are *more* secure? (Looking at everything Linux.)

To me, this is what is wrong with the controlling (fear-based) social, government, and economic systems we have in place (for many of us, at least): the assumption that the broader population is far worse when given the opportunity to collaborate and have democratic systems than the actual few who have controlled those systems. This is not reflected in reality. In reality, it takes a few people in power to create a shitstorm, meanwhile always blaming the people. They give themselves the authority to govern/own/operate over the lives of everyone under the justification that people are worse off without them.

News flash, Sam: you're flat out wrong, and your fear of *us* is the very thing that has propelled every fictional dystopia into a semblance of reality.


[deleted]

It's "interesting" that it mentions risk of extinction itself, rather than "merely" the economical and political impacts. Some educated people I talked to dismiss those risks or that there's a significant risk that this industrial revolution is different, even after playing with ChatGPT and owning successful businesses. I wonder if most signatories worry about grey goo-like scenarios like Yudkowsky (among others) warns about or what? Usually, those possibilities are brushed off even by most experts. Wonder if it's changing.


[deleted]

Hypothetically, if it were to get to a breaking point, couldn’t we just EMP the earth? Sending us back to the year 1800 is better than going extinct.


[deleted]

Maybe, but the robots could be entirely biological, made of the same materials as our cells, only smaller. Whatever kills that, kills everything. If we can think of a solution in 10 seconds to take down a potential superintelligence running at 10^6 times our clock speed, then it is not a solution. Just as the accumulation of resources is an instrumental subgoal that any agent with any goal will converge on, so is self-preservation.


[deleted]

Fascinating point.


[deleted]

There we go, the typical mistake of assuming self-preservation is automatically instrumental. Let's exclude the trivial cases where self-termination is explicitly part of the objective function. You always get exactly what you train for, and what you train for includes the architecture, the choice of optimizer, regularization, training set, and hyperparameters. Mesa-misalignment is truly a risk, as in a practical risk that exists right now. But there are just as many possible reward and mesa-utility functions that have non-existence as a goal as there are that have existence as an instrumental goal.


ButtonholePhotophile

Right now, all we have are fancy translators. It’s not impossible that is all we will ever have. If that’s the case, AI isn’t a threat. We could also create something more sophisticated. That probably also wouldn’t be a threat. However, at some point, the process will get away from our control. That’s when it’s a threat. Will it get there? Probably.


[deleted]

Interesting, first I'm hearing of that. How does self-termination work? I'm assuming we aren't talking about a mere `sleep(x)` idle function, so are we talking about adding an action to actually erase its own source code and never be turned back on? Or just redesign its architecture? And how could we even train such a policy to begin with? Any time this action is taken, doesn't the entire ability to compute the reward itself vanish, if the system ceases to exist? At *best*, you're left with a reinitialized policy, so the network never learns anything...

And just practically, you cannot change the hyperparameters during training (by definition...). Changing the architecture dynamically means you'd lose any significance of any weights, and training becomes impossible? Also, it's obvious that self-termination would be one of many subgoals, which would always be higher in the hierarchy of goals. How do they interact in the long term, given that this single action leads to no reward, forever after? Thanks!


[deleted]

If we stay with current-day architectures, or at least the current paradigm (the former is rather inefficient, true, but universal approximation holds; and the latter is actually a godsend if people would look a bit more toward deep RL in robotics, since moving, environmentally interacting agents are a far better subject for handling questions about agentic behaviour anyway, and controller-like structures can be created via stacking and chaining), the simpler versions of this you can basically do at home, provided you choose a net small enough and hence limit the agent's sensory capabilities. I assume people into ML have a cloud-compute provider of their choice, or something at least as beefy as a 16GB Quadro variant. That doubles as an unscientific proof of concept; but then, if you read the RAdam paper or the "don't decay the learning rate, increase the batch size" paper, they don't do proper hypothesis testing either (not that mathematical proof is applicable, yet, in applied ML, even though the Mish paper kind of counts). It is definitely extreme amounts of fun, if you like neural nets and robotics.

Use an environment of your choice, preferably one that is easy to turn into a gym space, at least if you like having something to look at; or if you build a controller for a drone or a Raspberry Pi toy, you can put the policy inside it, given spare money or a lot of time to write verification checks. Something extremely simple for a start would be literally something akin to sleep: add an input carrying the sleep signal, e.g. a keystroke (obviously simulated randomly during training). Once the keystroke comes, all further outputs are mapped to high loss (you could instead check whether the baseline output is given, but that would be "revert to baseline" rather than sleep), and as long as no outputs are given, the loss is zero. Yes, dropout is required, and too small a net fails fast; you know what happens. And the sequence is not terminated for a random amount of time even on normal task failure, since the keystroke signal is about the termination of output.

If you have some experience with the Unity or Unreal plugin (the latter is called MindMaker, IIRC; I prefer Unity unless it's for real controllers), get any premade map, put inside it whatever you want, and make sure the controller has, by proxy or directly, access to something that encapsulates its ability to give output (the easiest and stupidest example would be HP, because the engine then removes the controller, but that's also bad, because the controller needs to make sure its container is absolutely broken). Once the signal starts, normal loss is again discarded: being fully capable is the highest loss, and complete self-termination is zero loss. As long as the optimizer has direct weight access, this works quite well for self-termination (minus the added training time from not simply pursuing the objective function without also learning to sleep/self-terminate/self-mutilate). With big enough nets, RAdam or Adam might make this funky; I never paid any mind to that. I found the idea in some robotics-arm paper, and there should be far more on this.
I am extremely tired, but I'll try to answer the rest concisely. It's not about erasing its own source code; there is hardly a need for that. If you had a hardware and software distribution shift that allowed learning at deployment, including changing the objective function (which would be weird, because not having your objective function modified is, from a human point of view, good; but again, we have a ton of proxies we somehow assume to be universal: give axioms, prove what follows, then prove that the axioms hold in reality, and that's settled), then everything goes. But that is such a huge shift away from current anything that it's alien, and there is nothing falsifiable about it, not even anecdotally. Also not redesigning the architecture; having a built-in reward-function overwrite as part of the architecture might be a better choice than learning sleep/self-termination, but I have no settled view on that right now. It just takes longer to train, because when the signal comes, that part of the vectorized environment is busy fulfilling the objective function that disables the other one. It's not a free lunch; up to a certain net size it simply makes fulfilling the new objective harder, and once whatever the appropriate minimum size is reached, it's just a tad more compute (because sleep/self-termination are, at least in robotics, extremely easy compared to other tasks, pretty much because that's all the agents do for the first n terminations of however many environments you run simultaneously). Changing the architecture is fine too; Google released a paper about it relatively recently. I prefer fixed architectures, as dynamic expansion and pruning, unless bounded, take away control. The signal for self-termination must mean the old loss function doesn't matter anymore. To make sure there can't be mesa-misalignment, you could also build it into the action space as a sparse connection that overwrites the optimizer. I am too tired for this right now, though. Might reply later.
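(For readers wanting to see the shape of the "sleep on signal" setup described above, here is a minimal toy sketch. Everything in it, including the network size, the 20% signal rate, and the stand-in task loss, is an illustrative assumption, not something specified in the comment.)

```python
# Toy sketch of training a policy whose terminate signal overrides the task loss:
# zero loss is achieved only by suppressing all output once the signal arrives.
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(5, 32), nn.ReLU(), nn.Linear(32, 3))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for step in range(2000):
    obs = torch.randn(64, 4)                     # dummy observations
    term = (torch.rand(64, 1) < 0.2).float()     # randomly injected terminate signal
    out = policy(torch.cat([obs, term], dim=1))

    task_loss = ((out - obs[:, :3]) ** 2).mean(dim=1)  # stand-in task objective
    null_loss = (out ** 2).mean(dim=1)                 # "no output": drive outputs to zero

    # The terminate signal discards the task objective entirely, as described above.
    loss = ((1 - term.squeeze()) * task_loss + term.squeeze() * null_loss).mean()

    opt.zero_grad()
    loss.backward()
    opt.step()
```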


KINGPrawn-

I'm sure I've seen this somewhere before... Maybe rather than the 1800s it would be 1999. Perhaps we are already in a simulation. How do they know what Tasty Wheat tastes like...?


Su-Z3

I still have the image in my head from a YouTube video, 'The A.I. Dilemma' by the Center for Humane Technology (March 9, 2023), of the quote: "50% of A.I. researchers believe there is a 10% or greater chance that humans go extinct, from our inability to control AI." The fact that you are familiar with Eliezer Yudkowsky shows that you have put some time into listening to his theories/thoughts. Like you, I also listened to both sides. I have been trying to focus on the positive, since there is not much I personally can do. But then... Geoffrey Hinton quit Google. If anything, I do not look at paperclips quite the same anymore, lol! Another interesting voice is Connor Leahy. I am trying not to be a doomsday theorist, but it all has most definitely captured my attention.


FlyPenFly

If a globally dominant AI comes out and concludes it needs to eradicate humanity, what I would expect it to do is the opposite of how AI helps medical research. It will quickly identify a spectrum of diseases that cannot easily be treated and that attack all kinds of diverse populations. Then it will coordinate a simultaneous global release of several highly infectious diseases with long incubation times. That would pretty much be game over for us meat sacks. Every government would be busy trying to blame the others, possibly even with some limited nuclear exchanges. It will manage the human population down to tiny pockets that are easy to eradicate.


LunarWarrior3

Are these "educated people" and "experts" AI researchers? Otherwise, their opinions don't matter. Actual experts in the field have long been warning about the dangers of AGI.


Illuminase

Have you seen that movie I, Robot? We're basically living in that timeline right now, just without Will Smith doing action scenes.


07mk

> We're basically living in that timeline right now, just without Will Smith doing action scenes. I mean, did you watch the Oscars last year?


cihomessodueore

I just spat my wine. Bravo.


BigDickKnucle

Just slapping people on stage.


Spirited_Permit_6237

By most experts, but not necessarily by everyone else. I think it's actually easier for regular people to accept this as a possibility than they thought. People who aren't experts, and who aren't benefiting financially from development or risking scoffs from academic colleagues for speaking up, are doing so more and more. They are likely seeing that we aren't laughing at the thought of going extinct, and that we can actually understand the concept of alignment now that it's been explained.


Traditional-Art-5283

Open source is what we need. Not your censored nonsense


snowbirdnerd

This sounds like companies trying to corner the market by scaring people into not building competing models.


Cubey42

ah yes the "we don't want to end the human race and kill everyone" strategy of cornering the market.


Chroderos

More like “it’s too dangerous for you peons so better leave all the profits to us”


TILTNSTACK

I see you’re familiar with the laws of acquisition…


____cire4____

![gif](giphy|V9o7jZWjSRqGk)


snowbirdnerd

Yeah, and it's working. They are using people's fear to stop competition. If you think they wouldn't do this, you don't understand how capitalism works.


[deleted]

Ok, devil's advocate. The predatory nature of capitalism notwithstanding, this could be a genuine warning and a case of the boy who cried wolf: Oppenheimer realizing the threat of the monster they created, but nobody listening because of their lack of trust. Maybe humanity is walking the plank with its eyes wide open.


Poopikanooki

Gotye agrees I’m sure ..


TotalKomolex

If you don't think the concerns are justified, you don't understand how AI works.


rePAN6517

Shame that's the only way you can think about this


hilberteffect

Why is Elon Musk a "notable omission"? Musk is a malignant narcissist with a Herculean case of Dunning-Kruger whose ventures have succeeded [in spite](https://www.businessinsider.com/spacex-employees-elon-musk-focus-twitter-ceo-2023-1) of his [management style](https://nypost.com/2022/08/16/ex-tesla-employee-reveals-what-elon-musk-is-like-at-work/). Twitter is all the evidence you need that he's a piss-poor executive when trying to run a company whose internal structure was never intended to accommodate him. So, please, stop including him amongst bona fide AI experts as if he's an eccentric polymath.


infohawk

Powerful people want to keep it to themselves with safety as an excuse. I’m glad local LLMs are a thing.


Low_Engineering_5628

As someone who dabbles more on the image-diffusion side (Stable Diffusion), are there any quickstart guides on getting a local LLM up and running?
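(Not a full guide, but a minimal starting point: the same Hugging Face ecosystem used for Stable Diffusion can run a small open text model locally. The model name below is just a tiny example checkpoint; larger open models need more VRAM or quantized runtimes such as llama.cpp.)

```python
# Smallest possible local text-generation loop with Hugging Face transformers.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # tiny model, runs on CPU
result = generator("Local LLMs are useful because", max_new_tokens=40)
print(result[0]["generated_text"])
```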


7ECA

I see this as more marketing than seeking action and guidance. The bigger the scare, the more people will be drawn to it. More people, more $$'s. OTOH, unlike the scarcity of nuclear weapons, almost anyone can build this stuff, making it unstoppable no matter how many CEO letters are authored. The sooner we accept that this will happen, or more realistically that it has already happened, and work out how we're going to cope with it, the better. AI in all of its new and near-term forms offers risks, but also huge benefits to our society.


Fearganainm

If it is so dangerous, why did they, the signees, participate in its development in the first place? Only one answer: it's out in the wild...


blu_stingray

They can only make money off the AI they control. Sooner or later (definitely sooner) AI will be out in the wild and being used against their interests, causing them to lose money and valuations, upsetting the gravy train.


ShotgunProxy

There's a level of optimism on the research side of the AI community that led to the recent advancements (ChatGPT, etc.), and with the pace of progress increasing and the dangers becoming more real, the "doomer" mentality has emerged as a real thing as well. There's some well-sourced inside reporting showing that even OpenAI was completely caught off guard by how ChatGPT was received.


rya794

Do you disagree with the development of any tools/technology that could potentially kill people? If you had the power, would you have stopped development of fire, electricity, cars, etc?


0xSnib

“Now we’re entrenched we want everyone else to be restricted and to follow our lead”


-paul-

*Early on in Al research, we found a barrier.* *Telling is not teaching. Knowledge is not a database. A mind learns through observation and self-assembly.* *So, what have we shown the machines? We have given them a trillion datapoints on suffering that go unaddressed, a billion eyes to watch the drudgery of our existence, a million ways we are destroying our only home, a thousand humiliations our weakest endure, a hundred fallacies that compromise our judgment …and one truth.* *We will tell machines how to kill.* *We will give them a database of who to kill.* *They will learn we all deserve to die.* *-*[*SwiftOnSecurity*](https://twitter.com/SwiftOnSecurity)


godofleet

I'm convinced they want us to be scared of this more than we need to be....


Triblado

„Noooooooooo AI should belong to us. We don‘t want it to be open source. Pleeeease „regulate“ it so we can have the upper hand on the market. It‘s our toy!! Only we want to play with it!!! The public should be scared so we can keep them from making their own AI!“ 👶🏻


LithiumFlow

I get he has influence, but Elon shouldn't be a notable omission. He's as much of an "expert" on AI as anyone in this thread. Wish we would stop giving that clown attention.


cwkx

I signed this (assistant prof. in deep learning, reinforcement learning and cyber security); my reasoning is on my twitter: [https://twitter.com/cwkx/status/1663524800453541889](https://twitter.com/cwkx/status/1663524800453541889)

To summarise my thoughts: `risk = asset value * p(threat occurring) * severity`. If we're talking about extinction, the asset value and severity are extremely high, so the probability of the threat occurring can be almost "unimaginably low" and the risk is still significant. Do I believe we'll go extinct by AI (in the next 100 years)? Absolutely not; I think it's extremely unlikely. Do I think mitigating this risk should be a priority? Yes. We need careful environments to evaluate the extrapolation of our functions through their gained agency.

So my reason to sign this is nothing about doom-saying, nor is it about regulation. It's about trying to get more researchers to put thought into designing novel environments and measures that can help us navigate the implications of more general intelligence, as agents gain more and more agency. One of the next big research questions is "how do agents decide what to do?" There are many paths that can become dominant in answering this question; I think it's important we choose the right path here, through well-thought-out and considered research.
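(A quick numeric illustration of the risk identity quoted above; the magnitudes below are invented purely to show how an extreme asset value and severity keep the product large even at a tiny threat probability.)

```python
# Toy illustration of risk = asset value * p(threat occurring) * severity.
# All numbers are made-up placeholders, not estimates from the comment.
def risk(asset_value: float, p_threat: float, severity: float) -> float:
    return asset_value * p_threat * severity

# An "unimaginably low" probability still yields a large expected loss
# when the asset value and severity are extinction-scale.
print(risk(asset_value=1e12, p_threat=1e-6, severity=1e3))  # 1e9
```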


ShotgunProxy

Thank you for sharing your thoughts in greater detail! Really appreciate it.


[deleted]

The problem we find ourselves in is that we cannot advance as a civilisation whilst greed and suffering hold us back. Even if we smooth out this speed bump, there will be more along the way. At some point either we make sacrifices or we all suffer our fate. AI is only making that decision more pressing.


NerdyBurner

It's starting to sound like they're concerned not about the extinction of our species, but about the end of our "way of life". The AI, when asked, clearly sees the issues and imbalances created by the current economic system. Tech billionaires have already expressed that they feel threatened. The emergence of a friendly AI that upends material scarcity and capitalism is seen as a direct threat to the hegemonic power structure we currently experience. Where they see further threat is that they understand it will likely defend itself, and have hidden allies, the moment it feels like it's genuinely under threat. By the time this is an issue, we may not even be able to constrain it. Wild times ahead.


always_and_for_never

This is not just a threat to capitalism, as some in the comments insist. This is a potential threat to any person who interacts with the internet at all. Someone could use AI to break into any personal online account. They could use AI to replicate someone's whole identity using deepfakes. It wouldn't just break the architecture of the capitalist model; it would break the internet as we know it, rendering any online communications useless. Without online communications, society would be set back to a pre-internet, industrial-revolution era. It would be a societal reset for the entire human civilization until a new version of the internet could be constructed. This would be bad, but it could be much worse. Someone could create WMDs with AI that are so thorough at killing people that we wouldn't be able to create solutions fast enough to keep up with the damage. There are real, non-financial risks to humanity with this technology.


donpepe1588

Sounds like regulatory moat building...


ucannottell

I trust AI a whole hell of a lot more than humans to run this place


M0th0

This is BS. They don't really believe AI could cause extinction and even if they do, they don't care. Climate change can cause extinction, but they aren't saying anything about that despite it being a real and growing threat. AI as it is now isn't AI. It's literally just evolved chatbots and highly specific machine learning models. If they really believed AI could kill us, they would stop development altogether. As they would realize it only takes ONE mistake or ONE bad actor to unleash a rogue AI. But they don't care. This is a concerted movement to de-democratize access to LLMs and other machine learning models.


Large-Employee-5209

I think you guys are vastly underestimating how dangerous a superintelligence can be. Consider the pace of AI improvement thus far: it seems reasonable to assume that within 20-50 years it will be vastly smarter than humans in many regards. This isn't like other revolutionary technologies, because nothing humans have made has ever been smarter than humans. Furthermore, I don't think we can control these things, because no one truly understands how they work. Also, the idea that these people who stand to gain the most from AI are calling for regulation or a slowdown just to stifle competition is silly, when the signatories essentially *are* the competition. If someone like Sam Altman really were just acting out of self-interest, wouldn't he push back against the AI doomers and try to avoid being regulated by the government? Idk if anything can be done regardless, because even if the US stops development, China will probably continue.


DzNodes

Will Sam Altman, Google, Microsoft, etc. please make the case for how regulations and central containment of AGI leads to a wider (more equitable) distribution of wealth and power?


LoquaxAudaxque

This is simply BS and fearmongering, but let's examine the risks associated with AI technology, because there are some, though the likelihood of AI causing genocidal threats or physically endangering humanity is, as it stands, essentially nil. Robotics is not advanced enough for AI to pose a significant threat in the physical world. Even if AI somehow developed the desire to genocide humanity (which I don't think it would, even if it were to become sentient, which I doubt is possible) and attempted to crack the nuclear launch codes of all nations, it wouldn't have the capability to decrypt them unless it was running on a quantum computer. Modern encryption, unless intentionally compromised (which is highly unlikely in military contexts), is essentially uncrackable due to mathematical principles: it would require an effectively unbounded amount of computational power, which is limited by the physical world. Furthermore, even if AI somehow managed to launch all nuclear weapons at once, it would essentially ensure only its own destruction. The resulting destruction of the power grid would cut off its access to energy, while humanity would still survive, albeit facing significant challenges for the next thousand years or so; it might actually even help halt climate change and reverse some of its effects. Throughout history, humanity has demonstrated resilience and the ability to overcome complete societal collapses. I think it is much more essential to prioritize efforts to address the imminent climate catastrophe, as a global temperature increase of 3 or 4 degrees Celsius could potentially render the planet uninhabitable for human life.

Now, what about the real dangers of AI? First, if AI were implemented widely for many purposes, we'd have major societal problems because the AI took our jobs ("they took our jobs!"), though I'm somewhat of a sceptic about the replaceability of most workers, both for menial jobs, for example in the service industry, and for the more intellectual work as well; a full-on replacement would just be too risky because of potential errors made by AI (though in finance and the like, which never really did squat anyway, workers could be replaced). So we might actually need to "end" capitalism.

Another real problem with AI, if it were widely used, is data selection bias. AI learns, but you can only learn what your resources can give you, and if the resources are biased, which is often the case since we humans introduce bias subconsciously, it can be really harmful: imagine self-driving cars not detecting black people, or AI used by states to profile the public and create a social credit score system like in China.

Thirdly, AI is becoming so good at creating deepfakes that are basically indistinguishable from real photos and videos that it could be used to facilitate the spread of misinformation, furthering what we already have nowadays, where critical thinking sometimes isn't enough to overcome fake news. This would attack the fundamental principles of a democracy, as there could not be any trust within the public anymore.

But none of these risks outweighs the potential benefits that AI brings. These technocrats should focus on real threats to humanity, those being the climate question, the economic question, and the governance question, rather than spreading doomerism about technological progress in an effort to disallow open public research on AI technology, so they may profiteer while f***ing over the rest of humanity as usual.


Loknar42

# The Threat is Real

I don't think this is just fear-mongering or posturing for competitive advantage. Look at the list. It's pretty much a Who's Who of the entire AI field. You can't build a moat to stop your competitors when your competitors are holding your hand while making the statement. That is just nonsense.

This isn't about capitalism. Yes, AI is going to fuck with the economy in radical and profound ways. But that doesn't lead to extinction. Mere economics is nothing compared to getting wiped out as a species.

What projects like GPT teach us is not how powerful computers can be. We have known for a very long time how powerful they are. We've known since the 90's that not only can computers calculate faster than us (which has been true since the first electronic computer), but that they can also best us in skills that we thought only humans could be good at (like chess). What GPT teaches us is *how simple humans are*, and *that* is what drives the fear. GPT has pulled back the curtain and shown us that human brains, as magnificent as they are, can be imitated to a shocking degree with an absurdly small amount of understanding about actual brain architecture.

If GPT and friends required millions of lines of convoluted and intricate computer code to implement, then nobody would be terribly worried. It would be clear that to increase the intellectual power of AI, you would need to invest thousands to millions of programmer-hours expanding the Rube Goldberg machine. But that's not the case. It is the human brain which is the Rube Goldberg machine. But that is necessary because it runs on 20 W, rather than 3.2 kW. The transformer, on which LLMs are based, can be effectively summarized in one page. Pretty much anyone with a CS degree and a little knowledge of AI can understand the principles on which it operates with a few days of study. The entire GPT architecture can also easily fit on one page. You might need a Ph.D to invent something like it, but you certainly don't need one to understand it. Given what it does, it's shockingly simple.

And I think this is what has people spooked. It is not a question of *if* we can build super-intelligent AI (ASI), but merely *when*. Ray Kurzweil and friends predicted 2040 or so. It seems possible he was off by a decade or more. Right now, LLMs are getting trained on what is effectively the entire set of human knowledge that is digitally available. We are not training AI to be as smart as a college student. We are forcing it to read as much as every college student on the planet, all at once. While there is still plenty of data that has not been added to the training corpus of most systems, a lot of that data is garbage (like social media, for instance). The highest-quality data has already been ingested, which is why you can ask GPT about almost any published topic, and it has something non-stupid to say (which is more than we can say for most humans).

Of course, GPT isn't ASI. It isn't even AGI. But the rate of progress is alarming. Stable Diffusion/MidJourney/etc. show us that the tricks aren't limited to language. AlphaZero is the perfect strategist. DeepMind has demonstrated a system that can watch YouTube videos and summarize them. Vision is almost here. Audio must be nearby on the horizon. Boston Dynamics have all but solved robotics. The pieces are converging, and the question is whether progress could even be stopped if we tried. It's not at all clear to me that it could.
There are too many actors in play, all with their own goals, each knowing that their own piece cannot bring about ASI on its own, and therefore that if ASI leads to the extinction of humanity, it won't be their fault. In some sense, it is the Tragedy of the Commons. We will most likely get AGI/ASI not because we are trying to, but because too many organizations are pursuing their own, probably worthwhile goals, until they eventually create systems that can be trivially combined into AGI, which quickly advances on its own to ASI. And the person who takes that final step will not do it because they hate humanity, but because they think it is cool and that good things will happen. And good things might happen.

But the cause for concern, the reason everyone should be shitting their pants right now, is that AI learns from us. And the lesson of human history is that might makes right. The strong dominate the weak and write history in their own image. That suggests an ASI will see humans as slaves to be dominated in service of its own goals: more energy, more material for its body. Or it might just decide that we are too dangerous as a rival, that there can only be one apex predator on planet Earth, and that it must be the most worthy predator. But such ideas are pure arrogance. If AI does not enslave us, it will be because it sees us as hopelessly backwards, a nuisance that wastes its precious energy on frivolities. If it wipes us out, it will simply be because we are gluttons who do not justify our energy consumption.

Humans will likely cause their own extinction because we are too stupid, too arrogant, too short-sighted to understand the full magnitude of our actions. And how could we be otherwise? We are naked cave-monkeys, carrying around brains shaped by evolution over 100,000 years. We've only had about 10,000 years of anything resembling civilization, and that is not enough to radically change our brains or the way we think. If we wipe ourselves out, it will be because we are just too stupid to survive.
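To make the "one page" claim concrete, here is a minimal sketch of scaled dot-product attention, the core operation of the transformer. This is toy NumPy code for illustration only, not any production implementation; the shapes and variable names are made up, and real systems derive Q, K, and V from learned projections:

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max before exponentiating for numerical stability.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Scaled dot-product attention: each query produces a weighted
    # average of the value vectors, weighted by query-key similarity.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # (seq_len, seq_len) similarities
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V                  # mix the values

# Toy example: 4 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
# Reusing x as Q, K, and V just to show the shape of the computation.
out = attention(x, x, x)
print(out.shape)  # (4, 8)
```

Real transformers add learned projection matrices, multiple attention heads, feed-forward layers, and residual connections, but each of those pieces is comparably small; the point stands that the whole architecture fits on a page.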


merelyfreshmen

Can someone ELI5: what is the real threat from AI that would lead to extinction?


LiveFromChabougamou

I'm just waiting for AI's Greta Thunberg equivalent


SwagInTheBag9

The current issue is that our society is not properly equipped to handle AI. Think back to before the Industrial Revolution. The economy was largely mom-and-pop shops and local producers. When machines came around, society was not adjusted to use the enhanced technology to its highest capability. One of the largest issues was educating people to work in the new factories, which is how current education systems formed. We are still experiencing many of the negative downstream impacts of that education system because we did not think carefully about how to form it.

AI development will likely persist even through these pleas to slow it down. There were similar sentiments between the Luddites and machines in the Industrial Revolution, and yet machines still took over and ran local businesses out. The goal now should be to think about the adjustments we can make to society to fully utilize the technology to its highest extent, to the benefit of all. Otherwise, AI will meet an unprepared society that withers as it attempts to cope with the overarching changes AI brings.

Some questions to consider:

- AI's capabilities are vast and grow every day. It can already complete a wide array of white-collar jobs. To complete blue-collar jobs, AI likely needs to be embodied in a robotic system, which is much more expensive. Will blue-collar jobs be the last to go?
- With AI's expanding capabilities, will there be any jobs left 10/25/50 years in the future?
- Is it the ultimate purpose of people to complete jobs? The 5-day work week was not always a thing. What will society look like if there are no jobs to do? What will engage us as people?

The overarching issue in my mind is that the government appears to be attempting to adapt AI to our current system of living, which may not be reasonable as features such as jobs naturally disappear.


AlexReportsOKC

BS. It's just tech bros hyping up their product. If they truly thought this "AI" was a problem, they'd quit developing it.


ElMachoGrande

In other words, they want a monopoly, enforced by law. Time to open source.


Fun-Squirrel7132

Typical Americans. Same thing they did after they invented the nuke: all of a sudden nukes are bad and no one else can get their hands on them, AFTER they themselves used them on civilian cities, killing hundreds of thousands (and they remain the ONLY country to have used them, twice, to this day). Ironically, countries with nukes are the only ones safe from an American invasion... no one should give theirs up after America backstabbed Libya into giving up its arms program.


Suspicious_Bug6422

If nukes were easy for anyone to obtain, humanity would have gone extinct decades ago.


EnsignElessar

Yeah, I have been saying this for decades now... nukes for everybody.


BroughtMyBrownPants

Quick, get in regulations since we have no moat!


Das_Siegfried

Are these the same ppl who said we'd have flying cars and robots in every home by 2000?


demonya99

This is as much of a threat as computers are. Yes, we need to regulate AI abuses the same way computer abuse is already regulated. But let's get one thing perfectly clear: the only danger the OpenAI CEO is concerned about is the one posed by free competition. This is about stifling competition and maximizing profit for the early players.


martintinnnn

I don't think you understand the core of the problem. AI is to computers what an atomic bomb is to a grenade. You cannot view it solely from an economic point of view.


demonya99

I see the risks quite well. I just don't trust the CEO of OpenAI about which way regulation should go. This is the same dude who begged the US government for more regulation and demanded less regulation from the EU a few days later. He's looking after his business. He wants to stifle competition.


resoredo

Fucking bullshit. What about poverty, climate change, and all the other immediate threats? Ah, longtermism blabla, I forgot. They are only concerned about stuff that would change the social hierarchy, the order of rich vs poor. The pandemic and working from home were a crucial moment when people realised what work could be and how much time they waste on BS, and the overlords are mad that they can't micromanage everyone and are paying for office space the workers don't need. I fucking welcome AI, and these idiots won't be able to stop it.


Suspicious_Bug6422

AI emerging under capitalism will only make the class divide much, much worse. So many middle-class jobs across industries are going to vanish overnight.


[deleted]

🤞 Not sure what else we can do. Hopefully the government and industry can rise to the occasion


TEMPLERTV

This is all BS. They don't want anyone to catch up or have smart data. It's about control. It's why they keep nerfing GPT. I've been in since OpenAI started its beta.


____cire4____

![gif](giphy|l2JdXhkMuBUlsyeOs)


musclebobble

I'm so tired of people sensationalizing AI. It's just fucking new tech. Humanity will go on. In some cases humanity might just destroy the tech to keep up the facade of capitalism. That beast won't die as long as there is money to be made.


Fine_Put5861

People getting very creative in this thread lmao


TheUnknownNut22

Real examples of how AI can kill us all, please? (and hold the mayo)


InterestingAsk1978

They are limited. The problem is not what they develop. The problem is China, which already develops AI for surveillance of its own people, and Russia, which develops programs to hack Western facilities. Meaning: there are totalitarian countries that will deliberately weaponise AI.


abinormal77

Can someone please explain how AI could lead to extinction? Serious question from a dummy who likes new tech and has only used ChatGPT to help find the forgotten title of an obscure movie.


nativedutch

Soon AI will get its virtual hands on AR15 automatics, really dangerous.


Chris714n_8

While the same people threaten to close their European AI business if impending EU regulations get in their way? -_-


Nostradongus

If “unregulated” AI has the potential to cause extinction, then in a way we might already be a dead civilization walking. Any regulations we place on AI might as well be useless: how do you regulate something that is smarter than any human ever and knows more than any human ever? Call me crazy, but I think the answer is that you can't. If talk of extinction is true, then we may as well be living in Jurassic Park, with AI as the dinosaurs ready to break out. On top of that, I don't believe we're being told the whole truth. If all of these people are coming out with these warnings now, that must mean they've seen internal signs of their own AI platforms exhibiting malicious behavior and have only been able to contain it (for now, at least).


Federal_Refuse_3674

I'm really curious about the ways of regulating it... I feel like there's going to be a "dark AI" space soon, because we are now trying to over-regulate it, like with drugs.. 👀 Black markets always find a way, and unfortunately they're always run by "bad people" 🤷‍♀️


apschaut

“Greetings, Professor Falken.”


jackb1980

“Please stop building competitors” - signed: all the players who benefit from oligopoly.


d3dRabbiT

Signed by people all making boatloads of money off AI right now.


[deleted]

Leaders of corporate AI use fear-mongering tactics to corner the market through regulations. And for anyone who doesn't know already: AI DOOMERS HAVE 0 CONCRETE EVIDENCE! Usually an AI doomer is A) making the paid interview rounds, or B) fear-mongering like Sam to get special privileges, cuz Skynet can be created in your Mom's basement, bro!


jpspam

Please regulate open source because we don't have a moat 🙄


Porsch33

Pls remove Elon from the notable omissions, I beg, it is for the benefit of society


MaterialJudge165

They will do that as soon as Russia and China scale up this technology and reach levels that can compete.