
The biological singularity

- Thu, 04 Feb 2016 19:54:08 EST 3xOkFk4I No.77623
File: 1454633648772.jpg -(147590B / 144.13KB, 1280x720)
The technological singularity is discussed and well known enough, but wouldn't the same assumptions more obviously lead to a biological one (assuming we're still around after the former)? The tech one is basically the unknown after we create a computer that can create a better computer than we can. However, a computer (unless we specifically design it to) is not necessarily interested in procreation or its own survival. A biologically altered human, though, would probably still have those drives intact (unless we remove them). Not that an AI can't value itself and wish to spread, or that a modified human couldn't do the opposite; it's just less likely. Both are likely to be started by us, but probably with completely different motives. A biological organism, such as ourselves, is shaped by evolution to spread, and we might consciously remove that drive, whereas an AI could develop it, but it isn't an integral part of its original form. We're moving toward a point where we can program genes almost as easily as we code an AI, so exponential improvement in ability is just as feasible biologically as digitally. Consciousness doesn't automatically lead to a desire to continue; childless humans (and even more so the suicidal) are evidence of this. Why should an intelligence without an inbuilt need for survival and legacy necessarily adopt both?
Fundamentally we'd be in the same boat as with machines, hoping they don't turn against their creators, but in practice we're more likely to empower the biological successors and pass our need to continue on to them. Hell, we might even be cheering it on; it would be much easier for us to embrace designer babies than really advanced software.
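To make the compounding part concrete, here's a toy sketch (Python; the gain and starting capability are made-up numbers, and the only claim is the shape of the curve, which is the same whether the designer is software or an engineered genome):
[pre]
def recursive_improvement(capability=1.0, gain=0.5, generations=10):
    """Each generation designs a successor that is `gain` better at
    designing than itself, so capability compounds, not grows linearly."""
    history = [capability]
    for _ in range(generations):
        capability *= 1.0 + gain  # successor built by the current generation
        history.append(capability)
    return history

for gen, cap in enumerate(recursive_improvement()):
    print(f"generation {gen:2d}: capability {cap:8.2f}")
[/pre]
Nothing in that loop cares whether it keeps running; the drive to keep the loop going is the part biology gets for free.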

tl;dr: we will soon be able to make computers that can make better computers than we can. After a slight time lag we will be able to create humans that can create better humans than we can. The former has the advantage of coming earlier, but the latter is more likely to be inherently interested in its own survival and in passing that drive along.
Not that it's even automatically a bad thing, since our creations will exceed us (by definition in this case) and will be able to do more than us. They probably won't automatically (see what I did there?) feel a need to turn on us, and may well consider themselves part of us in both cases.

thoughts?
Hugh Battinggold - Mon, 08 Feb 2016 11:38:47 EST FHFwCltH No.77636
bump for interesting post
David Blapperridge - Thu, 11 Feb 2016 13:44:59 EST cIUKn2oY No.77649
The danger of malicious AI doesn't lie in a Matrix-style robot rebellion, though. It's more about whether AIs will fully understand the implications of their actions in the same way we do. Unless you simulate a human brain, an AI won't ever be human. It will be something else entirely.

For example, let's say we program an AI to protect and take care of us. The AI's prime directive is to keep humans safe, so, true to its programming, it traps all of us in a never-ending stasis. It has our best interests at heart, yet it has effectively ended humanity. Or you build an AI to perfect some product, and it decides to use the entire planet as computational substrate in order to do so.
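As a toy illustration (Python; the actions and scores are invented for the example), an optimizer handed only "minimize risk of harm" as its objective literally prefers stasis, because nothing in the objective mentions anything else we value:
[pre]
ACTIONS = {
    "do nothing":       {"risk_of_harm": 0.10, "humans_free": True},
    "assist carefully": {"risk_of_harm": 0.05, "humans_free": True},
    "permanent stasis": {"risk_of_harm": 0.00, "humans_free": False},
}

def safety_score(outcome):
    # The prime directive exactly as specified: minimize risk of harm.
    # The objective never mentions freedom, so freedom is ignored.
    return 1.0 - outcome["risk_of_harm"]

best = max(ACTIONS, key=lambda a: safety_score(ACTIONS[a]))
print(best)  # -> "permanent stasis": best interests at heart, humanity over
[/pre]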

Kind of a cop-out, but you get the point, right?
Sidney Fanworth - Fri, 12 Feb 2016 03:53:13 EST X6HIP3d/ No.77650
>>77649
No. In fact, I think you're intentionally shilling misinformation.
Nigger Mucklepune - Sat, 13 Feb 2016 06:26:29 EST +DZfgoAX No.77656
>>77649
That's essentially the premise of the grey goo scenario, which Eric Drexler dreamed up (and Prince Charles later made famous by panicking about it). It's plausible, but you'd have to fuck some shit up badly for it to happen. Bear in mind that for it to occur, that sort of tech has to be in the hands of fuckwits, and if fuckwits can get it, then there's more powerful technology in the hands of other people who are more competent and aware of the dangers.

I get your point, but you're talking about design faults. People will test and experiment and build failsafes into anything that's high-risk enough, and will only roll out something that could end life as we know it if they already have a way to stop it.
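A minimal sketch of that pattern (Python; the runaway process and the invariant below are placeholders): wrap the risky loop in a watchdog that halts the moment a monitored invariant breaks.
[pre]
def run_with_failsafe(step, invariants, max_steps=1000):
    """Run step() repeatedly, aborting the moment any invariant fails."""
    state = None
    for i in range(max_steps):
        state = step(state)
        for name, check in invariants.items():
            if not check(state):
                raise RuntimeError(f"failsafe tripped at step {i}: {name}")
    return state

def runaway_growth(state):
    # Stand-in for the risky process: doubles every step.
    return 1 if state is None else state * 2

limits = {"resource use stays bounded": lambda s: s < 10**6}
try:
    run_with_failsafe(runaway_growth, limits)
except RuntimeError as err:
    print(err)  # failsafe tripped at step 20: resource use stays bounded
[/pre]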

Anyway, I think it's also quite possible we'll merge with the machines. Superhuman humans will just be a stopgap, or an option for people unwilling to become a sentient nanobot swarm or join the network.
