by James Corbett
TheInternationalForecaster.com
July 28, 2015
Well, you can’t blame them for trying, can you?
Earlier today the grandiloquently named Future of Life Institute (FLI) announced an open letter on the subject of “autonomous weapons.” In case you’re not keeping up with artificial intelligence research, that means weapons that seek and engage targets all by themselves. While this sounds fanciful to the uninformed, it is in fact a dystopian nightmare that, thanks to startling innovations in robotics and artificial intelligence by various DARPA-connected research projects, is fast becoming a reality. Heck, people are already customizing their own multirotor drones to fire handguns; just slap some AI on that trend and call it Skynet.
Indeed, as anyone who has seen RoboCop, Terminator, Blade Runner or a billion other sci-fi fantasies will know, gun-wielding, self-directed robots are not to be hailed as just another rung on the ladder of technical progress. But for those who are still confused on this matter, the FLI open letter helpfully elaborates: “Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group.” In other words, instead of “autonomous weapons” we might get the point across more clearly if we just call them what they are: soulless killing machines. (But then we might risk confusing them with the psychopaths at the RAND Corporation or the psychopaths on the Joint Chiefs of Staff or the psychopaths in the CIA or the psychopaths in the White House . . .)
In order to confront this pending apocalypse, the fearless men and women at the FLI have bravely stepped up to the plate and . . . written a polite letter to ask governments to think twice before developing these really effective, well-nigh unstoppable super weapons (pretty please). As I say, you can’t blame them for trying, can you?
Well, yes. Actually you can. Not only is the letter a futile attempt to stop the psychopaths in charge from developing a better killing implement, it is a deliberate whitewashing of the problem.
According to FLI, the idea isn’t scary in and of itself, it isn’t scary because of the documented history of the warmongering politicians in the US and the other NATO countries, it isn’t scary because governments murdering their own citizens was the leading cause of unnatural death in the 20th century. No, it’s scary because “It will only be a matter of time until [autonomous weapons] appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc.” If you thought the hysteria over Iran’s nuclear non-weapons program was off the charts, you ain’t seen nothing yet. Just wait till the neo-neocons get to claim that Assad or Putin or the enemy of the week is developing autonomous weapons!
In fact, the FLI doesn’t want to stop the deployment of AI on the battlefield at all. Quite the contrary. “There are many ways in which AI can make battlefields safer for humans,” the letter says, before adding that “AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so.” In fact, they’ve helpfully drafted a list of research priorities for study into the field of AI on the assumption that AI will be woven into the fabric of our society in the near future, from driverless cars and robots in the workforce to, yes, autonomous weapons.
So who is FLI and who signed this open letter? Oh, just Stephen Hawking, Elon Musk, Nick Bostrom and a host of Silicon Valley royalty and academic bigwigs. Naturally the letter is being plastered all over the media this week in what seems suspiciously like an advertising campaign for the machine takeover, with Bill Gates and Stephen Hawking and Elon Musk having already broached the subject in the past year, as well as the Channel Four drama Humans and a whole host of other cultural programming coming along to subtly indoctrinate us that a robot-dominated future is an inevitability. This includes extensive coverage of the topic in the MSM, including copious reports—in outlets like The Guardian—telling us how AI is going to merge with the “Internet of Things.” But don’t worry; it’s mostly harmless.
. . . or so they want us to believe. Of course what they don’t want to talk about in great detail is the nightmare vision of the technocratic agenda that these technologies (or their forerunners) are enabling and the transhumanist nightmare that this is ultimately leading us toward. That conversation is reserved for proponents of the Singularity, like Ray Kurzweil, and any attempts to point out the obvious problems are pooh-poohed as “conspiracy theory.”
Thus we have suspect organizations like the “Future of Life Institute” trying to steer the conversation on AI into how we can safely integrate these potentially killer robots into our future society even as the Hollywood programmers go overboard in steeping us in the agenda. Meanwhile, those of us in the reality-based community get to watch this grand uncontrolled experiment with our world’s future unfold, just as we have watched the uncontrolled experiments of genetic engineering and geoengineering unfold.
What can be done about this AI / transhumanist / technocratic agenda? Can it be derailed? Contained? Stopped altogether? How? Corbett Report members are invited to log in and leave their thoughts in the comment section below.
James,
As interesting as the question “what can be done” about the AI agenda might be, it could easily be directed at any number of other collectivist agendas as well, many of which may be equally or even more devastating. As I have mulled over the appropriate responses to their enslavement process for many years now, it has become apparent that there are limited choices.

Number one, I believe that the only possible response with any measure of success must be a personal nullification of their aims to reduce ME to serfdom. I am not as much a pessimist as I am a realist, and I do not see much movement from humanity to save itself from the whims of psychopathic behavior AS A GROUP. I have worked dutifully for the past decade, and will continue to do so, with a goal of self-preservation by becoming as self-sufficient as humanly possible within the restrictions of limited resources – which in itself separates me from the NWO creeps who seem now to have unlimited amounts of money and influence.

Nothing would please me more than to see aggressive action taken by an organized force of aware world citizens to combat the scum of sociopaths who believe they have some divine right of power and control over mankind. I simply do not favor waiting for that to happen, so I will take matters into my own hands, as I think it is everyone’s destiny to do anyway.

I consider myself to be a religious person, but my beliefs do not correspond to any organized religion or dogma. I merely do not see this universe happening by some accidental occurrence. In that vein, I would hope that divine intervention into this debacle would at some point be undertaken by the real powers-that-should-be to put a stop to it.

All that said, I will continue to follow your absolutely invaluable research into both the motives and the methods of the compassionless. It may be the best weapon we have in our arsenal – awareness.
Keep up your exceptional efforts, for my sake and everyone else’s – you are a true patriot of planet Earth.
Your comments are smart. There’s no need for violence to occur. Not that I haven’t wanted to take an AK and wipe out the entire lot from time to time, myself. Believe me when I say, the Cabal is all coming down very, very soon. Crashing worldwide. When that happens you will need to avoid the mainstream media talk as it rises in timbre/tone and avoid violence of any kind while focusing on what you can positively accomplish. Help your neighbor (even if you think he’s a dick) and we can all move into a healthier world using some of the technologies we paid for all our lives but haven’t gotten to use. We will finally be able to replace the outdated ones – the combustion engine, for example. Ugh!
Just leaving this here
http://motherboard.vice.com/read/watch-these-cute-robots-struggle-to-become-self-aware
I agree heavily on the first point–NO FEAR–not even with regard to the psychopaths. Be aware. Do not fear. It only adds to negative energy and feeds the evildoers who want it. Positive, happy, smile, laugh, love, hop, skip and any other adjective you want to fill in starves them because these thoughts do not key on–FEAR or the negative.
However, I’m not so sure the psychopaths can ever unplug the next quantum generation of computers once completely proofed, nor would they want to. If any of you would like to read what’s been happening and the reason I state this (with links from our illustrious lapdog government, their ancillary agencies, their buddies across the globe and associated alphabet agencies, etc. – all open-source information gathered from their own official documents online), send me an email. I transcribed the original interview with the compiler for a friend. It was almost 3 hours, so you can skip the babble if you read it. The transcript is 23 pages – not short. It’s a must-read on the psychopath/sociopath workings of the Cabal driven on an AI quantum platform that appears to be in place now. The Jade Helm 15 exercises also appear to be a boots-on-the-ground trial for the entire quantum chip driven system. Of course, it’s unlikely that the Spec Op participants have any knowledge of the real reason they are making war on American soil.
chantspire@hotmail.com; Subject Line: SEND me the Transcript.
And I’ll do it.
Yes, apparently. Do you want the transcript mentioned in my earlier comment? If so, send me an email.
For at least the span of our lifetimes, non-deterministic general-purpose AI is and will continue to be a red herring. What we will face first is avatar robotics (human-in-the-loop) enhanced by autonomous subroutines, and a smaller subset of fully autonomous machines tasked to highly specific missions.
For human-controlled avatars, autonomous subroutines will transparently take care of basics such as walking/running/flying across terrain. The human operator will only need to tell the machine where to go and what mode to travel in (max speed, max stealth, max energy efficiency, etc). Autonomous subroutines will handle details such as balance and obstacle avoidance.
Until relatively recently, using a robotic avatar would have been constrained by the range of the RF control loop, and by its bandwidth with regard to feeding back high-definition imagery and other data to the operator. That issue has been neatly resolved in many if not most places by the 4G network, and of course even higher bandwidth is forthcoming.
As for autonomy, the DARPA competitions illustrate how an autonomous robot could be tasked to enter a hazardous area and carry out very specific tasks, such as closing valves. It’s not much of a stretch to envision an expendable autonomous robot being tasked to enter a structure on an assassination or abduction mission. The robot would have a detailed map of the interior, it would “know” how to breach doors or windows as necessary, and it would use facial recognition or some other pattern-matching algorithm to discriminate targets.
The definition of “reasonable precautions” in terms of personal security is going to change drastically. Door locks and burglar alarms will not deter an expendable and powerful machine on a mission.
Our individual choice is to anticipate and adapt, or find ourselves at a disadvantage. There is no way back, other than immolating civilization. If by some miracle the entire US infrastructure forswore and outlawed autonomous robotics, the Chinese, for one example, would not. The technologies of the robot future have already been developed, down to even the comm network. And you thought 4G was for streaming your entertainment…
At this point, I fully believe (other than the magic aspect) that the prophecy of the classic tabletop RPG – and also of the SNES and Genesis games, and now a really great PC game that does justice to the “future” AD&D it was classed as – has come true. Seriously, get the second edition: the third edition, although superior to play, doesn’t have the same backstory as the Shadowrun 2nd edition main book about how the world is in 2052 (the third edition made some changes to the story), but anyway.
One day, we’ll have to have SIN cards – not implants, unless you want one, but that’s more expensive – and the rest of us will be SINless. SIN stands for System ID Number, which acronyms to SIN and is pretty much the wet dream of Christians who think the bar codes on goods are the mark of the beast, as these cards would contain all your info: no need for a wallet, it’s a debit/credit/driver’s license/college card/access card at work/etc. The SINless fight the corporations with weapons, countries having become mere tools of corporations by 2062. Also, unfortunately, since the magic element isn’t there, half of North America won’t be magically regrown with sequoias, the huge mountain tops that were sawed off, and all the lakes and ecology in the western part of America and almost all of Canada – except Seattle, which is an exclave of the UCAS, the United States of Canada and America. I’m pretty sure that if I have children – and that’s a big if, even with a decent enough salary to bring someone into this world – the question is: do I want to, if he or she will have to live as an outlaw by putting into practice everything I will teach them? Which means weapons, unfortunately, but also radio jamming to fuck with cops, and well, you see where this could be going.
And that’s almost the positive of it. I don’t think AI’s will be ever anything like Skynet with a nuclearized space with missiles pointed at our beautiful planet, just to make it blow away.
These people must really hate themselves to want to die so much. Must be all the snuff films they’re a part of.
It seems to me that the products of scientific inquiry, which will naturally keep progressing as humans are wont to expand their knowledge as a part of our DNA, need proper controls on how new knowledge is used. Proper control should not have profit motive, technological advantage over others or other society-harming ideologies and strategies as a basis for the control and review of new knowledge. We have no need of kill bots unless some dystopian vision of policing is in the works. At the international level we have states run by oligarchs who have their finger on the nuclear weapons button, so the need for kill bots at this level is redundant, but they will no doubt be pursued as part of the MIC’s next generation of must-have war violence products. Kill bots are being developed by the same command and control centres we all seek to at least remodel… so kids, don’t forget to smash the state.
I’m honestly pretty surprised that the consensus here so far seems to lean towards the idea that AI, in general, but in warfare specifically, is not something which should be looked at as an imminent threat. Personally, I don’t find a Terminator/Skynet type scenario particularly far-fetched, at least in some shape or form. Ultimately, the practical nature of how AI interfaces with society will largely have to do with the nature of the context in which it’s designed to function. Given that most of the high-level experimentation going on in the field is geared towards military use, it’s hard for me to imagine how AI with this disposition could function without becoming a serious risk.
In theory, strong AI would be able to advance beyond the human capacity for intelligence, as a piece of technology potentially could continue to advance far beyond the biological limitations that dictate human intelligence. Yes, this is a long way off, but I have little reason to believe that the mentality of the elite and the military industrial complex will evolve, even in the face of serious concerns that surface as the technology advances. (“Well Bob, RoboHog decimated the entire population of Afghanistan, including all of our personnel, but otherwise the experiment was wildly successful…”)
Whether or not some form of altruism could be hardwired into a sentient robot is a stand alone argument with its own ethical implications. The recent movie “Ex Machina”, which I thought was brilliant, touches on this. One of the questions posed in the film is whether or not the AI robot being tested is actually genuinely demonstrating empathy and self-awareness or just pretending to do so in order to manipulate those administering the test. Cleverly, the film doesn’t give us a conclusive answer, however it’s a perfect example of how the psychopathic elite carry out their agendas by manipulating and fooling enough of the general public into accepting the “official” narratives and lies which mask truths that would defy even the most basic levels of humanity or morality.
I think artificial intelligence given some sort of boundaries for defining enemies is inherently problematic, as the superficial, irrational, and contradictory nature of warfare makes drawing such boundaries defy logic in the absence of the emotional spin and propaganda used to justify and perpetuate it. If a robot/machine is designed to make autonomous decisions about how it defines self-preservation or the elimination of “threats”, it’s not that difficult for me to imagine how some of the nightmarish scenarios found in science fiction films could become reality.
On a side note: there’s a great short story by Philip K. Dick, “The Defenders”, about a post-nuclear war scenario being managed by robots above ground, where humans can no longer safely operate. The robots are essentially serving as the post-human representatives for the main superpowers, the United States and the Soviet Union. It’s quite clever and is clearly taking the piss out of the absurdity of the mentality of war, particularly nuclear warfare. It’s nothing like Terminator, RoboCop, or Blade Runner. However, Blade Runner is actually an adaptation of Philip K. Dick’s “Do Androids Dream of Electric Sheep?”. Both the film and the book are favorites of mine.
“The Defenders” is in the public domain and is widely available as an ebook. I highly recommend checking it out.
On an unrelated note:
James, I think your comment about being “back at the wheel” (or “back behind the wheel”) wasn’t necessarily the best choice of words to wrap your recent podcast regarding digital automobile hacking “conspiracy theories”. 😉
I’d say one of the general takeaways from the film was that it would be extremely difficult, if not impossible, to test the consciousness and capacity for empathy of a robot with strong AI in a way which is ethical. Nathan, the creator, is a narcissist, willing to take advantage of those involved in the experiment to find out whether his creation demonstrates “true” AI. But in the end, is he right that a creation whose capacity for genuine empathy towards humans has not been evaluated is a potential threat to humanity? How much of the risk associated with the experiment he’s working on comes down to the way he’s going about it? As you mentioned, are the potential faults of his creation merely a reflection of his own?
Ultimately, as I indicated in my previous comment, those with the necessary resources to actually fund the development of a sentient robot probably wouldn’t be the kind of role model you’d want for anything that would genuinely contribute more to society than what they extract or degrade. Bill Gates wonders aloud why ‘some people are not concerned’ about research in the field of AI. Given the fact that it would most likely be someone like him creating this type of entity, this is one point which I might actually agree with him on. A robot with the capacity to “innovate” beyond what Gates is able to do is not the kind of experiment I’d like to see come to fruition.
Regarding “Under The Skin”, I’m not sure whether having a better idea of what was actually going on ahead of time (to the extent that this would be possible) would’ve made this film any less disturbing. I saw a funny comment on IMDB when I was trying to figure out what the hell I just watched, where a user laughed about the idea of the response on behalf of men who watched the film just to see Scarlett Johansson nude. It’s sort of an appropriate analogy for what plays out on screen.
“the very act of attempting to create consciousness is itself unethical. Unless you’re sure to succeed and allow the “creature” to live a fulfilling existence… if you see what I mean…?”
Precisely.
This is a great comment, AoC. It’s in line with what I was thinking when I was watching the film. Ava, in essence, functions precisely the way the psychopathic political elite class do. They’re “human” enough not to give themselves away, even to convince us that they actually care about us and have feelings themselves, but in reality their own self-interests are what carry the day. It’s easy to make decisions that further your agenda when you don’t have to be weighed down by ethical standards or consequences. If your only goal is to increase profits as a corporation, you go where the cost of labor is cheapest, regardless of whether you destroy the livelihoods of thousands of people in one place and put the lives of another group at serious risk to increase profit margin at the expense of the most basic safety standards. If your goal is to secure the natural resources of a region, the plight of those whose lives you throw into chaos to implement control over the resources and infrastructure is inconsequential.
It’s an unpleasant yet logical progression. People gravitate towards this type of leadership in times of war, for example, because what’s needed is somebody who can make the best strategic decisions without being bogged down by the moral implications on a micro level. In times of peace, however, the same type of behavior promotes the same heartless and rash decision-making when logical necessity no longer dictates these measures.
As I indicated previously, it’s not necessarily that I think the idea of artificial intelligence on some scale is, in and of itself, an inherent risk, but based on the types of tasks such entities would likely be programmed to perform and on whose behalf they’re performing them (those writing the checks to fund the research), the end result, in my opinion, would almost undoubtedly be something hazardous. The only real restrictions on the super elite when it comes to achieving their objectives are the intellectual mechanics of outsmarting society as a whole and an interest in self-preservation: i.e., not finding oneself dangling on the pitchfork of an angry mob.
If artificial intelligence were to reach the threshold of singularity, based on the aforementioned, to quote Shane Legg, co-founder of Google DeepMind: “If a super-intelligent machine decided to get rid of us, I think it would do so pretty efficiently.” Essentially, the super elite are only interested in preserving us to perform necessary tasks as workers, or as entertainment: artists, musicians, gladiator-like sports figures. I think transhumanists like Ray Kurzweil envision the singularity as a sort of fusion with technology, but within that I think lies some of the hubris of these sorts of figures, like Bill Gates for example.
In another set of sci-fi movies, the “Alien” series (Alien, Aliens, and Prometheus being among my favorites), you have two main villains: the Xenomorphs (Aliens) and the Weyland/Weyland-Yutani corporation, who (not including the Prometheus plot) are interested in obtaining the Xenomorph for their bio-weapons divisions. The corporation, in very realistic fashion, puts the lives of those who come directly into contact with this threatening species in jeopardy in order to utilize this hostile creature for its own sinister purposes. However, it’s proved time and time again that efforts to control such a dangerous species, whose only apparent goal is to survive and reproduce at a basic primordial level, ultimately fail, and it’s the hubris of those who believe they are smart enough to control such a dangerous entity to serve their own goals which ultimately leads to their own demise.
Unlike the concept behind Alien(s), I don’t think artificial intelligence (or various attempts at an autonomous robot) is by default inherently bad or dangerous per se. However, so long as it reflects the interests of the super elite, I think there’s good reason to be very wary of efforts to advance this sort of technology. I think many of the problems we face as a society could be positively addressed through the harnessing of technology. I’m not a rejectionist of technology by any means, but when it comes to those presenting advances in technology, from gadgets to games, social media, and useful programs that simplify mundane tasks, we have to always remain conscious that the underlying motives of those pushing the technology (not necessarily those developing it) are, at best, only superficially altruistic. To master the technology without being mastered by the self-appointed “masters” of technology, you could say, is something worth striving for.
“Personally, I don’t find a Terminator/Skynet type scenario particularly far-fetched, at least in some shape or form.”
With respect, if your perception was informed by experience in software specification and coding, and a career working with computing platform hardware, rather than works of fiction, the Skynet scenario would not be an immediate concern.
If within our lifetimes we are led to believe some algorithm has become “aware” and gone “rogue”, it will be false flag cover for typical human skullduggery.
I think I’ve failed to articulate my real point here, so I’ll attempt to do so.
My concern about AI doesn’t have to do with robots arriving at some sort of ‘consciousness’ and turning against “us”. We already have a class of cold psychopathic elite which carry out this function. Advances in technology simply allow them to carry out their agendas more efficiently and effectively. When I mentioned a ‘Terminator/Skynet scenario’, I’m not talking about an algorithm where robots become “aware” or go “rogue”; I’m talking about a scenario where a piece of technology (military/weapons in particular) does what it’s “supposed to do”, but perhaps in ways which even those who designed it hadn’t intended.
As someone ‘informed by experience in software specification and coding, and a career working with computing platform hardware’, I’m sure you’re well aware that you can make a small change to one thing which leads to a cascade of errors, which can become a debugging nightmare, particularly if it’s not spotted immediately, temporarily rendering whatever you’re working on partially or completely non-functional. This could mean having clients screaming at you, massive headaches, and sleepless nights, but in most cases nobody’s actually going to get killed as a result (although there may be threats ;-)). Our collective military firepower is dangerous enough as it is. I find the concept of designing aspects of it to autonomously perform tasks normally carried out by humans pretty frightening.
Hopefully this clears things up some.
Also, just for additional clarity, when I’m talking about “robots” in military or policing situations: while the bar for conduct is often remarkably low, there are in some cases limits to what people are willing to do, or at least to how long you’re able to control them with various forms of threats or propaganda to continue doing so. Beyond that, if it’s at the cost of life and limb, eventually something’s likely to give. A robot or other piece of military technology, again like the super elite, does not have a conscience and therefore will carry out the task it’s been programmed to do without reservations and without fear of death.
The other comment is “awaiting moderation”, so if this doesn’t make sense as an “also” that’s probably why…
Right, LiquidEyes. Robots and whatever extent of autonomy they are equipped with, should be viewed as what they are fundamentally, which is a toolset. Man is a tool user.
While it makes good science fiction, the notion of a self-aware AI being given unrestricted control of WMD is just not in the cards. Human beings are not going to hand over that kind of power to a non-deterministic algorithm in the foreseeable future – it’s not in our nature.
An analogy is firearms: They are as dangerous or beneficial as the user, and for better or worse there is no going back to a world without them.
Society is never fully prepared for innovation, it always has to catch up after the fact.
I don’t know, Knarf. If human beings are psychotic enough to have created WMDs in the first place, with enough nuclear weapons to wipe out all traces of life on the planet probably one hundred times over, I’m less than confident that the ruling class will exhibit the capacity to make any rational decisions on whether or not it’s prudent to give R2D2 clearance to fire off the nukes should the technology arrive to do so “safely”. We’ll just end up with a droid race. What could possibly go wrong? 😉
Who knows though. Maybe the AIs will realize the ruling class (not the average person) are a threat to mankind and do us all a favor and neutralize them before they destroy all of us, AIs included.
“…enough nuclear weapons to wipe out all traces of life on the planet probably one hundred times over”
Life on this planet (and almost certainly elsewhere) has survived staggering cataclysms, far beyond what could be theoretically inflicted by detonating the entire human nuclear arsenal. We really aren’t that powerful.
But in fact the situation with nuclear armaments is not as officially described. Big surprise, governments lie and fear-monger. Consider for a moment how all our fears concerning nuclear weapons, ALL of them, are ultimately referenced to the “official line”. Why should they tell the truth concerning this particular subject, and virtually nothing else?
I’m not sure it’s a matter of whether or not the authorities are ‘telling the truth’, as you put it, more than it is a matter that we’ve seen (courtesy of the United States) what the aftermath of a nuclear bombing looks like and it’s certainly not something I’d want to see repeated. I agree with you that we have no reason to believe any of the “official” accounts of what various nuclear programs consist of, what the effects of some form of catastrophic series of detonations would look like, and that we could probably survive as a species, even under the worst case scenario. Again though, that would probably be a pretty grim aftermath. (I know you’re not arguing otherwise)
I honestly appreciate your optimism though, Knarf. I think there’s something to be said for not letting the worst case scenarios overwhelm you. Particularly when there’s a constant effort on the part of TBTsB to use various forms of apocalyptic alarmist rhetoric to advance various nefarious agendas.
“If everything that we used to do, like make music, write, build, compute, make families is supplanted by boundless robotic AI technology, what is left for man?”
We’ve had ubiquitous computing for at least 30 years now, and it’s a fair question whether the technology has enhanced or hindered human creativity on a net basis. For me personally it’s been a wonderful tool, but on the other hand I don’t have a “smart phone” Skinner Box to absorb every free moment and distract me while driving.
There’s nothing special about placing the term “robotic” ahead of “AI”, BTW. A computer is a computer, an algorithm is an algorithm. A robot is actually the last place to expect to find state-of-the-art AI, because of the constraints on the quantity of hardware and available power. Intense computing sucks power and generates heat, just ask anyone who participates in grid computing.
In fact the primary constraint on robotics now is power density. We are nowhere close to building a machine which can store and convert energy as efficiently and quietly as the human body.
To a degree, I liken it to the gun debate. Guns don’t kill; the people aiming them are the problem. I can see how it would be the same with AI and the programmers/owners.
As for true AI self-awareness, I suspect it will never happen. We don’t fully know or understand our own minds (we’re certainly useless at fixing them). I find it difficult to imagine us capable of creating a development matrix for AI self-awareness when we don’t understand what it really means.
Thanks for sharing, AoC. nosoapradio and I discussed this film earlier here.
While I haven’t read through every single comment, none of the comments I’ve read here seem to address the gargantuan elephant in the room…
I don’t think that the agenda here is to create AI and force it on humanity or address the fear of AI becoming self-aware. I think it is to sell us ‘the idea of AI’ which has been shown to fascinate people. Once we buy into it, it can be integrated into our lives to a point where we are dependent on it. It can be woven into military and other spheres, and then of course laws can be used to solidify the integration. Then the psychopathic social engineers can slowly take control of these ‘robots’ to use as a direct control mechanism. Thereby removing the pesky human element that has empathy from the chain of command. People need to understand that the so-called elite are control freaks that have zero empathy. The only reason why it is taking so long for them to completely dominate us, is because they have to delegate to and manipulate humans. Remove that and you have very sick in-humans with joysticks controlling very sophisticated machines which are now integrated into our everyday lives.
So yes, I do think it is a very big problem that definitely must be addressed!
And furthermore, I would suggest that one really effective solution may be to require all technology to be open source, so that there is never secret central control and there are always mechanisms to disable certain things.
Just a thought.
Right, AoC. As I attempted to highlight in my previous comment, my concern is not that robots would go “rogue” or turn against the population; it would just be a matter of them becoming functional to a level where the expendable class becomes more and more obsolescent to the elite. If a robot can do the same task as well as or better than its human counterpart, it’s not particularly difficult to imagine which option the elite are going to choose. While I believe I’ve also stated that the bar for conduct on the part of military and law enforcement working on behalf of the system is typically depressingly low, there are limitations to what these forces are willing to do (even if they’re minimal). With robots or autonomous weapons systems, the elite don’t even have to worry about placating those who enforce their policies.
All of this said, while there’s much that disappoints me about societies as a whole, I do believe in the greater good of the human spirit. As you mention, open source software has truly been revolutionary in allowing people to cooperate, share ideas, and break new ground outside the traditional professional realms, where in the past a specific job or company position was necessary to gain access to certain technologies, and what you were allowed to do with those tools was, at best, subject to approval.
It’s important to be wary and realistic about the ways technology is being used against us, but it’s just as important, if not more so, to remember that there’s strength in numbers and in the human spirit. There’s no reason to believe we can’t harness the power of technology to outsmart the rulers and come up with solutions to problems by working, networking, and cooperating outside the boundaries of the traditional top-down structures of power.