- 25 May 2023 00:15
#15274984
There's been a trope for a long time now that people will someday be able to upload their brains to a computer and live forever. Even if this were possible, I think there are a lot of philosophical and ethical problems with it.
First and foremost, if your brain were theoretically represented and stored as computer code, wouldn't that make everything about your memories and personality editable to anyone with write access? As we've seen with politically correct "AI" chatbots recently, even if people were able to upload their brains to a computer or whatever, they would probably be stripped of the ability to say anything politically incorrect. There would also be issues with cyber attacks, plus the maintenance and bug-fixing that would inevitably be necessary for such a complex system. In other words, editing these "people" would probably be completely unavoidable, at which point the sheeple would have to start pretending that no one would ever make an inappropriate edit to a digital homunculus.
And this brings me to my second point: what would qualify as having successfully uploaded someone's brain to a computer? Surely later versions of such a technology would be better than earlier versions. So if a supposed AI system is 90% you, does that still count as you? And how does it stack up against a later model described as 95% accurate?
Third, we can imagine even more obscure problems. Should we allow an admittedly imperfect copy of a person to exert its opinions upon the real world, for example by voting? When we consider that these "people" will necessarily be editable and in many ways different from a flesh-and-blood person, letting them make decisions about a real world they no longer live in would raise a lot of dilemmas.
My personal opinion is that this will never really be possible, although enough people may want it to be possible that they start treating it as if it were real, at which point I think a lot of questions will need to be asked. To explain what I mean: it is already possible to create chatbots that convincingly imitate people's social media presence. Assuming one were advanced enough and augmented with extra forms of data, someone might leave their estate to their chatbot or something, thereby theoretically conferring some legal rights upon it. I think it'd be pretty interesting to see how people react in the event that the chatbot is able to argue that it deserves what has been bequeathed to it.