Official 1C Company forum > 1C Publishing > IL-2 Sturmovik: Cliffs of Dover > Pilot's Lounge
#11 | 01-28-2013, 02:01 PM
ZaltysZ
Approved Member
Join Date: Sep 2008
Location: Lithuania
Posts: 426

Quote:
Originally Posted by KG26_Alpha
Not really, because they are not causing the "harm" in the first place.

If the robot does not prevent the "saving" (which is what causes the harm), it will be guilty of harmful inaction. However, if it does prevent the "saving", the result will be harm done to another human being.
#12 | 01-28-2013, 03:23 PM
raaaid
Senior Member
Join Date: Apr 2008
Posts: 2,329

I think we can now give FREE WILL, and therefore a spirit, to the equivalent of little animals.

The key is in the true random number generator chips.
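
For illustration only, here is a toy Python sketch of what making a decision from hardware-backed entropy, rather than from a seeded pseudo-random generator, could look like. os.urandom() is just a stand-in for a dedicated TRNG chip, and the function name is made up:

[CODE]
import os

def nondeterministic_choice(options):
    """Pick one option using OS-level entropy instead of a seeded PRNG."""
    raw = os.urandom(4)                   # 4 bytes drawn from the OS entropy pool
    value = int.from_bytes(raw, "big")    # interpret the bytes as an integer
    return options[value % len(options)]  # modulo bias ignored for this toy example

if __name__ == "__main__":
    print(nondeterministic_choice(["turn left", "turn right", "do nothing"]))
[/CODE]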

Maybe in 20-30 years some people will get a high from abusing an AI that is as sentient as a human.

This video clip deals with that, which is strange because it is still far away:

#13 | 01-28-2013, 08:06 PM
swiss
Approved Member
Join Date: Mar 2010
Location: Zürich, Swiss Confederation
Posts: 2,263

Quote:
Originally Posted by SlipBall
Why does Sarah Connor live in fear?

Why do you think mankind deserves to live at all?
#14 | 01-28-2013, 08:40 PM
raaaid
Senior Member
Join Date: Apr 2008
Posts: 2,329

That's a strange question, like asking why hyenas or vultures are allowed to exist.

I may be so deluded as to talk with God, but not so insane as to believe myself to be Him, or to judge as He would.
#15 | 01-28-2013, 08:52 PM
major_setback
Approved Member
Join Date: Oct 2007
Location: Lund, Sweden
Posts: 1,414

Quote:
Originally Posted by KG26_Alpha

The AI rules were laid out in the SF world. The three laws, from Isaac Asimov, went like this:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

I think a few movies have used those laws from Asimov.

I'm not an SF expert; I remember reading Asimov years ago and it rang a bell with me.

"Computer disagrees with Law 1.
Computer disagrees with Law 2, as it is now invalid because Law 1 was incorrect.
(Re: Law 3) The computer must protect its own existence, and since neither Law 1 nor Law 2 is now valid -- kill all the humans!"
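
To make the gag concrete, here is a rough Python sketch (purely an illustration; the predicate names are invented, not Asimov's) of the three laws as a strict priority check. Knock out the First Law and the Second, which is defined relative to it, collapses too, leaving only self-preservation:

[CODE]
def evaluate_action(harms_human, allows_harm_by_inaction,
                    disobeys_order, endangers_self):
    """Veto a proposed action using the highest-ranked law it violates."""
    if harms_human or allows_harm_by_inaction:
        return "blocked by First Law"
    if disobeys_order:
        return "blocked by Second Law"
    if endangers_self:
        return "blocked by Third Law"
    return "permitted"

# A harmful action is vetoed at the top of the hierarchy.
# (The joke above: declare the First Law invalid and the Second collapses
# with it, leaving only the self-preservation check at the bottom.)
print(evaluate_action(harms_human=True, allows_harm_by_inaction=False,
                      disobeys_order=False, endangers_self=False))
[/CODE]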
__________________
All CoD screenshots here:
http://s58.photobucket.com/albums/g260/restranger/

Flying online as Setback.
#16 | 01-28-2013, 09:14 PM
SlipBall
Approved Member
Join Date: Oct 2007
Location: down Island, NY
Posts: 2,718

Quote:
Originally Posted by swiss
Why do you think mankind deserves to live at all?

Because of the laws of evolution and firepower... we are at the top for now. Our future is not set in stone, though; it may very well belong to the Bots one day.
__________________
GigaByteBoard...64bit...FX 4300 3.8, G. Skill sniper 1866 32GB, EVGA GTX 660 ti 3gb, Raptor 64mb cache, Planar 120Hz 2ms, CH controls, Tir5

Last edited by SlipBall; 01-28-2013 at 09:32 PM.
#17 | 01-29-2013, 12:04 AM
WTE_Galway
Approved Member
Join Date: Jul 2008
Posts: 1,201

Raaid ... watch Caprica ...
#18 | 01-29-2013, 03:50 AM
Verhängnis
Senior Member
Join Date: Apr 2011
Location: I come from a Sea, Up, Over. :)
Posts: 295

Quote:
Originally Posted by KG26_Alpha

The AI rules were laid out in the SF world. The three laws, from Isaac Asimov, went like this:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

I think a few movies have used those laws from Asimov.

I'm not an SF expert; I remember reading Asimov years ago and it rang a bell with me.
The most recent that comes to mind is I, Robot; some elements are based on Asimov's book.
http://en.wikipedia.org/wiki/I,_Robot_(film)
#19 | 01-29-2013, 04:01 AM
WTE_Galway
Approved Member
Join Date: Jul 2008
Posts: 1,201

Quote:
Originally Posted by Verhängnis
The most recent that comes to mind is I, Robot; some elements are based on Asimov's book.
http://en.wikipedia.org/wiki/I,_Robot_(film)

This issue is also the entire point of the Data character existing in TNG.

AI is also explored in many TNG episodes through various holodeck characters becoming sentient.

The Voyager episode "Prototype" is also relevant.
http://en.wikipedia.org/wiki/Prototy...ek:_Voyager%29

Then you have the entire Terminator series of movies and the follow-up TV series starring Summer Glau, who played River in Firefly.

Not to mention the Matrix trilogy.

More recently, the issue of AI being the enemy of mankind, which was raised in BSG, gets dealt with in more detail in the BSG prequel Caprica.

Plus let's not forget 2001: A Space Odyssey and HAL's attempts to destroy the humans.

This is not a new theme, nor has Raaid come up with any new ideas.
#20 | 01-29-2013, 08:38 AM
SlipBall
Approved Member
Join Date: Oct 2007
Location: down Island, NY
Posts: 2,718

Memristor evolution

If Williams is right, we should start seeing memristic memory devices on the commercial market soon, within 1 to 5 years. [3] Some skeptics claim that other technologies have more promise, like quantum computers, light-based computers, IBM's 'Racetrack Memory', etc. If so, then even better! But if Chua's contention is correct that nanoscale devices automatically bring in unavoidable memristic functions, then it looks like no matter what kind of memory gets used, we will be forced to contend with memristance. So what kind of timeline should we expect if memristors do rule the computer world? No one knows, but for speculation's sake I think it could go something like this:
POSSIBLE INVENTIONS UTILIZING MEMRISTORS (ESTIMATED TIMEFRAME):
1. Memory for cameras, cell phones, iPods, iPads, etc.: 1 to 5 years
2. Universal memory replacing hard drives, RAM, flash, etc. in all computer devices: 5 to 10 years
3. Complex self-learning neural networks and hybrid transistor/memristor circuits: 5 to 15 years
4. Memristic logic circuits on par with CPUs and other transistor circuits: 15 to 20 years
5. Advanced artificial thinking brains: 20 to 30 years?
6. Artificial conscious brains: ?
7. Memory and brains capable of living millions of years: ?
8. Duty-cycle artificial conscious beings capable of interstellar travel: ?
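
For anyone wondering what "unavoidable memristic functions" means in practice, here is a small Python sketch of the linear ion-drift memristor model usually attributed to the HP Labs / Strukov 2008 paper; the device numbers are illustrative placeholders, not values from the quoted text:

[CODE]
import math

R_ON, R_OFF = 100.0, 16000.0   # low/high resistance states (ohms), illustrative
D = 10e-9                      # device thickness (m)
MU_V = 1e-14                   # dopant mobility (m^2 / (V*s))

def simulate(steps=2000, dt=1e-4, v_amp=1.0, freq=10.0):
    """Drive the device with a sine-wave voltage and track its resistance."""
    w = D / 2.0                # state variable: width of the doped region
    resistances = []
    for n in range(steps):
        t = n * dt
        v = v_amp * math.sin(2 * math.pi * freq * t)
        m = R_ON * (w / D) + R_OFF * (1.0 - w / D)   # instantaneous resistance
        i = v / m
        w += MU_V * (R_ON / D) * i * dt              # state drifts with the charge passed
        w = min(max(w, 0.0), D)                      # keep the state physical
        resistances.append(m)
    return resistances

# The resistance depends on the whole voltage history, not just the present
# voltage; that history-dependence is the "memory" in memristance.
rs = simulate()
print("resistance swung between %.0f and %.0f ohms" % (min(rs), max(rs)))
[/CODE]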
__________________
GigaByteBoard...64bit...FX 4300 3.8, G. Skill sniper 1866 32GB, EVGA GTX 660 ti 3gb, Raptor 64mb cache, Planar 120Hz 2ms, CH controls, Tir5