Is SkyNet Possible?
Re: Is SkyNet Possible? [message #395214 is a reply to message #395181]
Thu, 16 July 2009 12:53
_SSnipe_
jnz wrote on Thu, 16 July 2009 09:00 | All a load of total bullshit; a computer will do what it's told to do, no matter how fast it is. If someone engineers a computer with arms, legs and sensors, being self-aware would not make it any better than current androids. Even if it did have arms, legs and sensors, it would need to be programmed to do something. It would not become "self-aware" unless the programmer intended it to; as of yet, there isn't even a principle for it. People have gone as far as emulating the brain's billions of neurons and synapses, but it takes weeks and a megawatt of power to emulate just one second. And yet again, it is still simply following what the programmer told it to do.
|
That's the smartest thing I've seen on these forums. Thanks.
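For what it's worth, the neuron emulation jnz mentions boils down to stepping simple equations forward in time. A toy leaky integrate-and-fire neuron, sketched in Python with invented constants (nothing like a biologically calibrated model):

```python
# Toy leaky integrate-and-fire neuron: the membrane voltage decays toward a
# resting level each step, jumps with each input, and crossing a threshold
# emits a "spike". All constants are illustrative, not biological.

def simulate_lif(inputs, leak=0.9, threshold=1.0):
    """Return the time steps at which the model neuron spikes."""
    v = 0.0
    spikes = []
    for t, current in enumerate(inputs):
        v = v * leak + current      # leak toward 0, then integrate the input
        if v >= threshold:          # threshold crossed: spike and reset
            spikes.append(t)
            v = 0.0
    return spikes

print(simulate_lif([0.5, 0.5, 0.5, 0.0, 0.6, 0.6]))  # → [2, 5]
```

Emulating one neuron like this is trivial; jnz's point is about the cost of doing it tens of billions of times, with realistic dynamics, in real time.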
Re: Is SkyNet Possible? [message #395222 is a reply to message #395202]
Thu, 16 July 2009 15:00
nikki6ixx
CarrierII wrote on Thu, 16 July 2009 13:15 | My own concern is that it will look at 4chan, then decide we need to be wiped out.
|
That, or it'll just become an hero when it realizes that we're not even worth saving.
Renegade:
Aircraftkiller wrote on Fri, 10 January 2014 16:56 | The only game where everyone competes to be an e-janitor.
|
Re: Is SkyNet Possible? [message #395235 is a reply to message #395207]
Thu, 16 July 2009 18:18
Ethenal
mrãçķz wrote on Thu, 16 July 2009 13:52 |
CarrierII wrote on Thu, 16 July 2009 13:15 | My own concern is that it will look at 4chan, then decide we need to be wiped out.
I doubt self-awareness is even possible, we don't fully understand how our own brains work, how can we hope to emulate them?
|
Oh what aimbots do?
|
What aimbots do? Aimbots lock on to a bone in the engine... I can't imagine how that's anything close to emulating a neuron. Weirdo...
But anyway, I'm with jnz on this one. Don't think that's possible.
-TLS-DJ-EYE-K wrote on Mon, 18 March 2013 07:29 | Instead of showing us that u aren't more inteligent than a Toast, maybe you should start becomming good in renegade
|
[Updated on: Thu, 16 July 2009 18:18]
Re: Is SkyNet Possible? [message #395284 is a reply to message #395235]
Fri, 17 July 2009 04:44
nopol10
jnz is right. The insertion of "sentience inhibitors" to prevent self-awareness assumes the program could be self-aware in the first place (which isn't possible), and that makes the whole idea silly. But science fiction is full of silly but wonderful ideas, which is what makes it enjoyable.
nopol10=Nopol=nopol(GSA)
Re: Is SkyNet Possible? [message #395425 is a reply to message #395110]
Sat, 18 July 2009 15:41
slosha
I don't think it's possible. I don't think we'll ever understand just how we ourselves work, let alone imitate it with a computer.
The road I cruise is a bitch now, baby.
Re: Is SkyNet Possible? [message #395467 is a reply to message #395110]
Sat, 18 July 2009 21:09
R315r4z0r
If anyone here has played the game Mass Effect, the Geth are a good example of computers becoming self-aware and turning into their own aggressive faction.
For those of you who haven't, it's a simple concept to explain.
They were originally humanoid computers (not made by humans, but with a head, two arms and two legs) designed to serve their owners by doing chores and tasks. Little by little, they were upgraded to perform their tasks better. Eventually, however, they started to question their owners as to why they were created and what their purpose in life was. So, for fear of the Geth becoming problematic, their owners attempted to shut them all down and dismantle them. The Geth, fearing for their "lives", fought back.
Now, obviously, using another fictional story to answer a question based on a fictional story won't really settle anything... however, it lessens the impact of what jnz said earlier.
A machine can become self-aware rather easily, in concept: you just need to give it the ability to learn from its observations and actions and act accordingly. It doesn't have to be a PC that does it, either.
Also, our technology is definitely getting closer to accomplishing that task. Did anyone see Project Natal, which Microsoft is working on for the Xbox 360? That Milo program is definitely a first step into the realm of technological self-awareness. Milo may just be a "front" of self-awareness, but it's only a hop, skip and a jump away from a fully self-aware program.
Now, if you're asking whether the machine will then turn on humanity and start a war, that's a different story. If a machine becomes self-aware, you just need to treat it like any other life form that cares about its own life. (Either that, or you don't make it at all.)
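The "learn from its observations and actions" idea does have a standard minimal form: trial-and-error value learning. A toy sketch (the one-state, two-action problem is invented purely for illustration):

```python
import random

# Minimal "learn from observations and actions" loop: tabular value learning
# on a toy problem with one state and two actions. Action 1 pays off more
# often, and after enough trials the agent's estimates reflect that.

def learn(trials=2000, lr=0.1, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = [0.0, 0.0]                      # estimated value of each action
    for _ in range(trials):
        # explore sometimes, otherwise exploit the current best estimate
        a = rng.randrange(2) if rng.random() < eps else q.index(max(q))
        # action 1 is rewarded 80% of the time, action 0 only 20%
        reward = 1.0 if rng.random() < (0.8 if a == 1 else 0.2) else 0.0
        q[a] += lr * (reward - q[a])    # nudge the estimate toward the outcome
    return q

q = learn()
print(q.index(max(q)))   # the action the agent learned to prefer
```

Nothing here is self-aware, of course; it's exactly the "following what the programmer told it to" loop the thread is arguing about, just one that adjusts numbers as it goes.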
Re: Is SkyNet Possible? [message #395485 is a reply to message #395467]
Sun, 19 July 2009 00:05
Starbuzzz
It doesn't lessen the impact of what jnz said in the least. I find it to be a load of smokin' bull.
So the Geth started to question their owners? That's a cop-out. They questioned only because the question was coded in to be asked.
It can do only what it is programmed to do, like react to your actions in that Milo demo. Don't be fooled by Milo... all you have to do is remove your sensor belt and you can kick at the TV all you want; Kung Fu man will just stand there lookin' at your balls. It can "learn" in the sense of sensing your body movements and choosing an optimal piece of code to execute in return, but if no instructions are present, it cannot react.
This is what any advancement in such technology may bring in the future: optimizing and expanding the level and type of operations that can be performed. It will need untold amounts of programming and/or human input (Global Hawk and Predator are primitive examples). Even if you were to teach it the A-Z of all known knowledge and code in as many scenarios as possible, it still cannot think on its own.
You can walk up to it and ask it to give you a blowjob. Now, if your question is programmed in and variables are in place to allow it to respond with a positive or negative response, then you most likely will be getting your blowjob. If not, it will just stand there sniffin' at your smelly balls and you would need to try again later.
If a killer machine-gun-mounted robot is built with heat-seeking sensors and programmed to automatically fire on targets that emit heat, then it will do just that. It will not make a conscious decision to fire; it will only carry out the actions that are programmed for it (i.e., automatically engage heat-emitting targets).
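The auto-engage behaviour described here really is just a programmed comparison, nothing more. A minimal sketch, with the threshold and sensor readings invented:

```python
# "Auto engage heat-emitting targets" as pure rule-following: a comparison,
# not a decision. Threshold and readings are made-up illustrative values.

FIRING_THRESHOLD_C = 35.0   # assumed: anything warmer reads as "organic"

def should_fire(target_temp_c, weapons_armed=True):
    """No judgment, no awareness: just a conditional on a sensor value."""
    return weapons_armed and target_temp_c >= FIRING_THRESHOLD_C

print(should_fire(36.8))          # warm body in view: the rule says fire
print(should_fire(20.0))          # a cold rock: the rule says hold
print(should_fire(36.8, False))   # disarmed: nothing happens either way
```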
How can a massive processing unit such as Skynet suddenly become self-aware, form its own goals and motives, and manage all controllable assets and resources to complete its newfound objectives? It's inconceivable and IMPOSSIBLE.
We will definitely get to Terminator-level robots: fully programmed machines devoid of reason. In fact, I bet we can make them completely human-like with vast algorithms; but they will still not be self-aware.
You think the massive hunter-killer tank in Terminator 2 was going around on its own? In the movie, it is assumed to be so. But in reality, the tank is merely on its programmed patrol route, armed with heat-sensing weaponry that kills whenever organic heat-emitting material (such as human bodies) is detected.
Watch this video:
http://www.youtube.com/watch?v=peqEf5enXJs
Go to 0:54 in the video and watch carefully. You think that machine is making a conscious decision to fire? Will it EVER be capable of making such a decision? No.
If you look closely, all the heat-seeking sensors/radars are mounted on its rotating top turret along with the floodlights (according to the artists who worked on that model). The sensors detect humans, and based on that data the programming allows the firing of the twin plasma cannons. No conscious decision to kill is being made by the hunter-killer, though in the movie it is assumed to be so.
THIS is what I think we humans will be capable of achieving, and the technology is being developed in DARPA's laboratories. It is our natural course of doing things.
But it's simply wishful thinking that they will "somehow" progress to Cylon/Skynet-level self-awareness.
NOTE: This post is a longer version of jnz's post.
Re: Is SkyNet Possible? [message #395486 is a reply to message #395485]
Sun, 19 July 2009 00:18
Ethenal
Starbuck wrote on Sun, 19 July 2009 02:05 | It doesn't lessen the impact of what jnz said in the least. I find it to be a load of smokin' bull.
[...]
|
And we have a winner.
Any programmer will tell you it's not possible. A computer will only do EXACTLY what you tell it to and nothing more.
Re: Is SkyNet Possible? [message #395510 is a reply to message #395467]
Sun, 19 July 2009 05:00
jnz
R315r4z0r wrote on Sun, 19 July 2009 05:09 | If anyone here has played the game Mass Effect, the Geth are a good example of a computer becoming self-aware and becoming their own aggressive faction.
[...]
|
"However, they eventually started to question their owners as to why they were created and what their purpose for life was"
This is the exact line I've been debunking in my post. It's possible to emulate self awareness, such as a robot wanting to learn more for example. Just not actually achieve self awareness. Also don't forget, it is extremely difficult to even emulate a very simple conversation. There are many many web bots out there that attempt it, all they do is pick out key words and attempt to reply. Not like out own train of thought. There is one bot, however, that is on the right track. It learns and upgrades itself from the responses and questions of other people. It asks them what "this" means, and saves the answer to a database. This gives it a good shot at AI, but it's still far from human.
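Both kinds of bot described here, keyword matching plus saving answers about unknown words, fit in a few lines. A hypothetical toy, with a plain dictionary standing in for the database:

```python
# Sketch of a keyword-matching chatbot with a "learning" path: if no keyword
# matches, it asks what a word means and stores the answer for next time.
# Entirely invented; the dict stands in for the database jnz mentions.

class LearningBot:
    def __init__(self):
        self.known = {"hello": "Hi there!", "weather": "I can't see outside."}

    def reply(self, message):
        words = message.lower().split()
        for word in words:
            if word in self.known:            # keyword match: canned response
                return self.known[word]
        # nothing matched: ask about the first word so the answer can be saved
        return f"What does '{words[0]}' mean?"

    def teach(self, word, meaning):
        self.known[word] = meaning            # "upgrade itself" from the answer

bot = LearningBot()
print(bot.reply("hello bot"))        # → Hi there!
print(bot.reply("skynet rising"))    # → What does 'skynet' mean?
bot.teach("skynet", "A fictional hostile AI.")
print(bot.reply("skynet rising"))    # → A fictional hostile AI.
```

Which rather proves the point being made: the "learning" is still just a lookup table being filled in by a rule someone wrote.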
Re: Is SkyNet Possible? [message #395516 is a reply to message #395110]
Sun, 19 July 2009 05:26
DarkKnight
Skynet was just the example, but there have been countless other movies about machines destroying mankind. Maybe they are just doing what they're programmed to do. So what if the machine evolves to start thinking on its own?
Take, for example, the following article. Imagine this same type of coding in, say, a police robot. What if it determines you're worth killing and not saving? It still may not be self-aware, but in the end you're still dead.
http://www.danshope.com/news/showarticle.php?article_id=90
from the article
Quote: |
A new robot, dubbed "Starfish" because of its size and shape, has the unusual ability -- in the mechanical world, that is -- of fixing itself. The Starfish is programmed to recognize its parts, but not how they're arranged or meant to be used. It figures that out for itself, using trial and error.
|
Just imagine similar programming put into humanoid robots: given any situation, it figures out the best outcome.
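The "figures it out for itself, using trial and error" loop is essentially random search: propose a small change, keep it only if performance improves. A toy sketch, with a made-up fitness function standing in for the robot's real sensors:

```python
import random

# Trial-and-error self-configuration as hill climbing: perturb the current
# parameters, keep the change only if the (invented) fitness score improves.

def fitness(params):
    # stand-in for real sensor feedback: pretend the robot "walks" best
    # when both joint offsets are near 0.5
    return -((params[0] - 0.5) ** 2 + (params[1] - 0.5) ** 2)

def trial_and_error(steps=500, seed=1):
    rng = random.Random(seed)
    best = [rng.random(), rng.random()]         # start from a random guess
    for _ in range(steps):
        candidate = [p + rng.uniform(-0.1, 0.1) for p in best]
        if fitness(candidate) > fitness(best):  # keep only improvements
            best = candidate
    return best

best = trial_and_error()
print(round(best[0], 2), round(best[1], 2))     # ends up near 0.5 0.5
```

The real Starfish work uses the same keep-what-works idea against a learned self-model rather than a hand-written score, but the loop structure is the point.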
nopol10 wrote on Fri, 17 July 2009 06:44 | jnz is right. The insertion of "sentience inhibitors" to prevent self-awareness makes the assumption that the program can even be self-aware (which isn't possible) and this makes the whole idea silly. But science fiction has silly but wonderful ideas, which makes it enjoyable.
|
We wrote about going to space before it was ever conceived as possible. There are lots of sci-fi stories like this, examples of something thought up before it ever became reality. Not all sci-fi is silly ideas.
[Updated on: Sun, 19 July 2009 05:29]
Re: Is SkyNet Possible? [message #395529 is a reply to message #395110]
Sun, 19 July 2009 07:24
CarrierII
Computers do what they are told to do, and only what they are told to do.
Sentience is weird: it's not one algorithm, it's not even hundreds. It simply is. Even if you tell a computer to become self-aware, on some level it isn't, because it's been told to be self-aware. A computer cannot "make up its own mind"; even if it's programmed to "think", everything it does is predictable.
(Oh, and any robot apocalypse will be met with mass EMPing.)
Renguard is a wonderful initiative
[Updated on: Sun, 19 July 2009 07:24]
Re: Is SkyNet Possible? [message #395534 is a reply to message #395529]
Sun, 19 July 2009 07:36
Dover
CarrierII wrote on Sun, 19 July 2009 07:24 | Computers can only do what they are told to do, and only do what they are told to do.
Sentience is weird, it's not one algorithm, it's not even hundreds. It simply is. Even if you tell a computer to become self-aware, on a level it isn't, because it's been told to be self-aware, a computer cannot "make up its own mind", even if it's programmed to "think", everything it does is predictable.
|
That's why I always enjoyed Asimov's versions of computers gaining sentience. I can't remember the name of the particular short story, but I'll find it and post it here. Anyway, he's much more believable about it.
DarkDemin wrote on Thu, 03 August 2006 19:19 | Remember kids the internet is serious business.
|
Re: Is SkyNet Possible? [message #395596 is a reply to message #395110]
Sun, 19 July 2009 15:01
R315r4z0r
A computer will only do what it's programmed to do. And if it's programmed to not do what it's programmed to do, then what happens?
lolparadox. :V
Also, @Starbuck:
I said Milo may just be a "front" of self-awareness, meaning that it gives off the illusion that he is self-aware. But I also said that a truly self-aware program is only a hop, skip and a jump away.
Yes, a computer can become self-aware if it's programmed to do so. However, a computer doesn't have to be programmed by a sentient being in order to run that program.
You can program a computer to program itself through various means. Arm and leg attachments have nothing to do with it.
Think of it this way: when we are born, what do we know beyond basic instinct? If a computer were programmed to follow a few key minor tasks, but also programmed with the ability to learn the way a newborn baby can, then you've just created a synthetic sentient being. (Think of it like a Star Wars droid.)
If it's given the ability to learn like a human, then it will, because that's how it's programmed.
However, like I said, relating it to Skynet is a different story. Just because they would be synthetic doesn't mean they'd want to commit genocide.
[Updated on: Sun, 19 July 2009 15:07]