Your sources were Rationalwiki and a YouTube video with no references or bibliography.
My "Claims" included a graph from the United States Bureau of Labor Statistics and an MIT Technology Review article.
I'm not going to say the obvious.
Except that your references are irrelevant in the context of this conversation. You didn't provide any evidence that automation will bring about a resource-based economy. You simply provided a source for automation. That is like saying that the sky is blue because of magical monkey farts, and then providing a source stating that the sky is blue.
There is a lack of sources on this topic, which I do apologize for, but it is not my fault. This stuff is so fringe-wacko that it is only known about by techno-masturbators. The vast majority of people don't waste their time with this garbage, and thus there is a lack of sources. However, in my AI arguments, I used the book "Physics of the Future" as a source.
Although I do admit that my sources are not top-notch, you cannot dismiss them. Look at them. They trash an RBE very well.
Here is another one:
I suggest you read and analyze it thoroughly and watch the videos attached.
I provided a source that was relevant: it spoke of automation, and tentatively of how automation will lead to either a resource-based economy or corporate hell. I say tentatively because I threw away that assumption myself, and concluded with
"I don't know but money will be very hard to get if jobs are very hard to get. That's for sure."
That is a very relevant assumption in itself.
According to that same source, I myself said that the Zeitgeist movement and The Venus Project are bullshit. A society that doesn't deal with money is not. I will admit that I'm taking some of the ideology of the Zeitgeist movement and The Venus Project in claiming that scarcity is now becoming obsolete. (I'll go into this later.)
"You're mixing natural language processing (NLP) with artificial general intelligence. NLP is your basic chatbot that uses algorithms to give an appropriate response. The Chinese Room thought experiment seems centered around criticism of the Turing test, which, I have to say, is a shit test for evaluating AI intelligence."
"It was designed to criticize the Turing Test, but it applies to all artificial intelligences.
Also, I'm positive that any rational person would prefer to see real people fucking than "photorealistic CGI"."
And I'm very positive no one would see the difference, hence the word "photorealistic". Pornstars wouldn't risk STDs, and they can resort to getting money the old-fashioned way (prostitution). That was a joke.
"Now, according to this textbook (pdf here) the main attributes of AI general intelligence are:
- Reasoning, strategy, solving puzzles, that sort of thing.
- Knowledge representation & Commonsense knowledge representation
- Natural Communication
- Ability to use skills listed above towards goals"
"Doesn't matter, because they have no idea what they are doing. There is no evidence presentable that they do."
I already said this before: knowing what you're doing is in the same boat as spouting consciousness and sapience. I think I can give a fairly good analogy to your previous comment, and I hope it conveys my perspective on what you're saying:
Does a parrot know what it's saying when it replicates words? Does that mean it's not intelligent?
"Those are overrated traits. I'm sure AIs have already accomplished them. However, the biggest one of all is understanding what they are doing. AI is programmed. It follows a set of instructions given to it. That's all. We, on the other hand, are not programmed. We learn through experience.
There are two methods of creating artificial intelligence: the top-down approach and the bottom-up approach. The top-down approach programs all of the intelligence from the beginning. Thus, the AI follows all of the programming. Since all it is doing is following instructions, it is not understanding anything in the human sense. A top-down approach AI is:
Input -> Data Processor -> Output
The bottom-up approach is a neural network, following Hebb's rule. Hebb's rule means that it changes the strength of electrical connections between neurons every time that it completes a task, constantly rewiring itself.
The brain is not a computer.
Only a neural network can understand and learn like a brain can. Thus, to make an AI actually able to compete with a human, you need to reverse-engineer the brain: build an artificial neural network.
Robots with ANNs will not be mainstream, as they first need time to learn. Whereas programmed robots can just be factory-equipped with all the information they need, ANN robots need to learn like a human child. This is time- and cost-inefficient."
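The Hebbian update described in the quote above ("neurons that fire together wire together") can be sketched in a few lines. This is a didactic toy with numbers I made up, not any real system's learning rule:

```python
# Toy sketch of Hebb's rule: the weight of a connection grows in
# proportion to how often the two units on either side of it are
# active at the same time. Purely illustrative, made-up numbers.

def hebbian_update(weight, pre, post, learning_rate=0.1):
    """Strengthen the connection when pre and post units co-fire."""
    return weight + learning_rate * pre * post

w = 0.0
for _ in range(10):                     # ten co-firing events
    w = hebbian_update(w, pre=1.0, post=1.0)
print(w)                                # the connection has strengthened
```

Note that nothing here is "rewired" by a programmer: the weight changes only as a function of the activity the network experiences.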
It's funny that you mention Hebb's rule and ANNs as being unique and non-programmable, because that is the same way that AI learns today. You speak as if it will take years for an AI to go through very advanced ANN learning to achieve a considerable level of intelligence, when it would take a maximum of days.
Computers are very, very fast at dealing with data, and I think we already know this. A program can analyze 10,000 pictures, something that would take a human days, in minutes to an hour or two. Siri takes seconds to turn audio into text using machine learning and give an appropriate response. These may not be your high-quality, human-level responses, but it's damn well impressive for our time.
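To put a rough number on that speed claim, here is a toy single-neuron learner (the classic perceptron rule) on a made-up, linearly separable task. All the data and parameters are my own invention, but even in plain interpreted Python, tens of thousands of training updates finish in a fraction of a second:

```python
import random
import time

random.seed(0)
# A made-up separable task: label each point by whether x + y > 1.
points = [(random.random(), random.random()) for _ in range(1000)]
labels = [1 if x + y > 1 else 0 for x, y in points]

# Classic perceptron rule: nudge the weights on every mistake.
w0, w1, b, lr = 0.0, 0.0, 0.0, 0.1
start = time.perf_counter()
for _ in range(20):                      # 20 passes over 1000 examples
    for (x, y), target in zip(points, labels):
        pred = 1 if w0 * x + w1 * y + b > 0 else 0
        error = target - pred
        w0 += lr * error * x
        w1 += lr * error * y
        b += lr * error
elapsed = time.perf_counter() - start

accuracy = sum(
    (1 if w0 * x + w1 * y + b > 0 else 0) == t
    for (x, y), t in zip(points, labels)
) / len(points)
print(f"20,000 updates in {elapsed:.3f}s, accuracy {accuracy:.0%}")
```

A real image or speech model is enormously bigger than this, of course, but it also runs on hardware built for exactly this kind of arithmetic.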
I remember a documentary on the past, present, and future of computers. It had a segment on how computers and AI can learn, with a really good example story that fits well here. Here is the story:
Once upon a time, the US Army wanted to use neural networks to automatically detect camouflaged enemy tanks. The researchers trained a neural net on 50 photos of camouflaged tanks in trees, and 50 photos of trees without tanks. Using standard techniques for supervised learning, the researchers trained the neural network to a weighting that correctly loaded the training set - output "yes" for the 50 photos of camouflaged tanks, and output "no" for the 50 photos of forest. This did not ensure, or even imply, that new examples would be classified correctly. The neural network might have "learned" 100 special cases that would not generalize to any new problem. Wisely, the researchers had originally taken 200 photos, 100 photos of tanks and 100 photos of trees. They had used only 50 of each for the training set. The researchers ran the neural network on the remaining 100 photos, and without further training the neural network classified all remaining photos correctly. Success confirmed! The researchers handed the finished work to the Pentagon, which soon handed it back, complaining that in their own tests the neural network did no better than chance at discriminating photos.
It turned out that in the researchers' data set, photos of camouflaged tanks had been taken on cloudy days, while photos of plain forest had been taken on sunny days. The neural network had learned to distinguish cloudy days from sunny days, instead of distinguishing camouflaged tanks from empty forest.
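The failure mode in that story is easy to reproduce in miniature. In the hypothetical sketch below, every "photo" is reduced to two numbers I invented (a tank-shape score and an overall brightness), the training set has the same confound as the story (tanks only on cloudy days), and a deliberately lazy learner that thresholds a single feature latches onto brightness instead of shape:

```python
import random

random.seed(1)

def make_photo(has_tank, cloudy):
    # Invented features: a noisy "tank shape" score, and a brightness
    # driven almost entirely by the weather.
    shape = random.gauss(1.0 if has_tank else 0.0, 0.5)
    brightness = random.gauss(0.2 if cloudy else 0.8, 0.05)
    return (shape, brightness)

# Training set with the story's confound: tanks photographed on
# cloudy days, empty forest on sunny days.
train = ([(make_photo(True, cloudy=True), 1) for _ in range(50)]
         + [(make_photo(False, cloudy=False), 0) for _ in range(50)])

def best_single_feature(data):
    """Lazy learner: threshold whichever one feature best separates
    the training set at its mean."""
    best = None
    for i in (0, 1):
        mid = sum(photo[i] for photo, _ in data) / len(data)
        acc_below = sum((photo[i] < mid) == bool(t)
                        for photo, t in data) / len(data)
        for tank_is_below, acc in ((True, acc_below), (False, 1 - acc_below)):
            if best is None or acc > best[0]:
                best = (acc, i, mid, tank_is_below)
    return best

train_acc, feat, mid, tank_is_below = best_single_feature(train)
# Brightness (feature 1) separates this training set almost perfectly,
# so the learner picks the weather, not the tank.

# New photos where the weather no longer correlates with tanks:
test = [(make_photo(t == 1, cloudy=random.random() < 0.5), t)
        for t in [1] * 50 + [0] * 50]
test_acc = sum(
    ((photo[feat] < mid) == tank_is_below) == bool(t)
    for photo, t in test
) / len(test)
print(feat, round(train_acc, 2), round(test_acc, 2))
```

Just as in the story, training accuracy is near-perfect while accuracy on the unconfounded photos collapses to roughly chance. The lesson cuts both ways in this debate: ANNs learn fast, but they learn whatever actually separates the data they are shown.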
ANNs are used in many applications we all know and love today, and even the ones that take our jobs will most likely use ANNs to learn. I suggest you read this paper on neural networks in games (it also touches a little on other applications). link
(You have to download it to view it, just a warning. It's not a virus.)
I have to stop midway due to a real-world distraction; I'll continue later.