creepy or amazing? what do you think?
http://www.chonday.c...riter-automaton
"Philosophy is a pretty toy if one indulges in it with moderation at the right time of life. But if one pursues it further than one should, it is absolute ruin." - Callicles to Socrates
Amazing and very creepy.
Amazing how advanced clockworks could be back in the day.
Creepy... just look at it.
Animator for a living, I do things other than gripe about the future! Twitter Youtube Newgrounds Website
I love it! And yes, it's creepy as hell, but it's awesome to think that they were trying to build amazing things like this back then. Very intricate, and surely very time-consuming even to design such a construction. Nice find!
"All scientific advancement due to intellegence overcoming, compensating, for limitations. Can't carry a load, so invent wheel. Can't catch food, so invent spear. Limitations. No limitations, no advancement. No advancement, culture stagnates. Works other way too. Advancement before culture is ready. Disastrous."
There's definitely truth in that...
thanks
"Philosophy is a pretty toy if one indulges in it with moderation at the right time of life. But if one pursues it further than one should, it is absolute ruin." - Callicles to Socrates
And remember my friend, future events such as these will affect you in the future.
Today I Learned...
Elektro is the nickname of a robot built by the Westinghouse Electric Corporation in its Mansfield, Ohio facility between 1937 and 1939. Seven feet tall (2.1 m), weighing 265 pounds (120.2 kg), humanoid in appearance, he could walk by voice command, speak about 700 words (using a 78-rpm record player), smoke cigarettes, blow up balloons, and move his head and arms. Elektro's body consisted of a steel gear, cam and motor skeleton covered by an aluminum skin. His photoelectric "eyes" could distinguish red and green light. He was on exhibit at the 1939 New York World's Fair and reappeared at that fair in 1940, with "Sparko", a robot dog that could bark, sit, and beg.
source: https://en.wikipedia.org/wiki/Elektro; CC-BY-SA content
That's extremely impressive for a robot built in the 1930s!
Once I saw some robots designed under Fascism and Nazism; the materials were poor, little better than Coke cans, yet they were planned to be autonomous war machines. These creations never became functional, but it's interesting to see that people in the '30s already had a concept of robots partly similar to the one we have today.
I love robots and AI. I also love the ancient world and hearing about what our ancestors thought of the future. Take this to its logical conclusion, and you can figure that I have a fascination for sci-tech and sci-fi in olden times.
Some think robots only came into existence in the early 20th century. A few more liberal-minded people might think the 18th century would be a good starting point for robotics. In fact, automation and robotic mechanisms have been around for thousands of years. Not only that, but the ancients even had a vague idea of artificial intelligence. It was never going to be as technical and profound as our own, but they certainly entertained the idea of imbuing life into the nonliving.
And remember my friend, future events such as these will affect you in the future.
The videos have Greek subtitles, thanks for that, Yuli Ban.
Broken Promises & Empty Threats: The Evolution Of AI In The USA, 1956-1996
Artificial Intelligence (AI) is once again a promising technology. The last time this happened was in the 1980s, and before that, the late 1950s through the early 1960s. In between, commentators often described AI as having fallen into “Winter,” a period of decline, pessimism, and low funding. Understanding the field’s more than six decades of history is difficult because most of our narratives about it have been written by AI insiders and developers themselves, most often from a narrowly American perspective. In addition, the trials and errors of the early years are scarcely discussed in light of the current hype around AI, heightening the risk that past mistakes will be repeated. How can we make better sense of AI’s history and what might it tell us about the present moment?
This essay adopts a periodization used in the Japanese AI community to look at the history of AI in the USA. One developer, Yutaka Matsuo, claims we are now in the third AI boom. I borrow this periodization because I think describing AI in terms of “booms” captures well the cyclical nature of AI history: the booms have always been followed by busts. In what follows I sketch the evolution of AI across the first two booms, covering a period of four decades from 1956 to 1996. In order to elucidate some of the dynamics of AI’s boom-and-bust cycle, I focus on the promise of AI. Specifically, we’ll be looking at the impact of statements about what AI one day would, or could, become.
Promises are what linguists call “illocutionary acts,” a kind of performance that commits the promise maker to a “future course of action.” A statement like, “We can make machines that play chess, I promise” has the potential to become true, if the promise is kept. But promises can also be broken. Nietzsche argued over a century ago that earning the right to make promises was a uniquely human problem. Building on that insight, the anthropologist Mike Fortun has explored the important role promises play in the construction of technoscience. AI is no exception. In Booms 1 and 2, the promises about AI were many, rarely kept, and still absolutely essential to its funding, development, and social impacts.
Over the past year, no topic has fascinated me more than the history of artificial intelligence and robotics, especially the drama surrounding the two AI winters.
And remember my friend, future events such as these will affect you in the future.
They didn't have the hardware back then for the kind of capabilities we see today.
We don't have the hardware today for the kind of capabilities we promised from the start.
Honda Introduces Smarter 'Asimo' Humanoid Robot
2002-12-12
Honda unveiled on Wednesday an improved version of its two-year-old robot, which can now do much more than ring the famed opening bell at the New York Stock Exchange as it did in February this year. The new model, which Honda plans to begin leasing next month, can greet and recognize people, as well as perform advanced commands such as moving in the direction indicated by reading hand gestures.

A small step closer to more functional AI and Robotics.
And remember my friend, future events such as these will affect you in the future.
Straight from the '80s.
I'd be interested in seeing if we might be able to bring these old robots (including really old ones, like the Mechanical Turk) to life with modern AI methods.
And remember my friend, future events such as these will affect you in the future.
The Lighthill debate on Artificial Intelligence: "The general purpose robot is a mirage" [This infamous 1973 report led to the first AI Winter]
Lighthill report
The Lighthill report is the name commonly used for the paper "Artificial Intelligence: A General Survey" by James Lighthill, published in Artificial Intelligence: a paper symposium in 1973.
It was compiled by Lighthill for the British Science Research Council as an evaluation of academic research in the field of artificial intelligence. The report gave a very pessimistic prognosis for many core aspects of research in this field, stating that "In no part of the field have the discoveries made so far produced the major impact that was then promised".
It "formed the basis for the decision by the British government to end support for AI research in all but three universities"—Edinburgh, Sussex and Essex. While the report was supportive of research into the simulation of neurophysiological and psychological processes, it was "highly critical of basic research in foundational areas such as robotics and language processing". The report stated that AI researchers had failed to address the issue of combinatorial explosion when solving problems within real world domains. That is, the report states that AI techniques may work within the scope of small problem domains, but the techniques would not scale up well to solve more realistic problems. The report represents a pessimistic view of AI that began after early excitement in the field.
The Science Research Council's decision to invite the report was partly a reaction to high levels of discord within the University of Edinburgh's Department of Artificial Intelligence, one of the earliest and biggest centres for AI research in the UK.
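The combinatorial-explosion point is easy to make concrete. Here's a toy Python sketch (my own illustration, not from the report): with b possible moves per step, a brute-force planner has to consider b**d action sequences of depth d.

    # Toy illustration of the combinatorial explosion Lighthill described.
    def count_plans(branching_factor: int, depth: int) -> int:
        """Number of distinct action sequences an exhaustive search visits."""
        return branching_factor ** depth

    # Hypothetical micro-domain with 4 possible moves per step:
    for depth in (5, 10, 20, 40):
        print(f"depth {depth:2d}: {count_plans(4, depth):,} plans")
    # depth  5: 1,024
    # depth 10: 1,048,576
    # depth 20: 1,099,511,627,776
    # depth 40: about 1.2e24 -- hopeless for any hardware, then or now

Small domains stay tractable; realistic ones don't, which is exactly the scaling failure the report called out.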
And remember my friend, future events such as these will affect you in the future.
From Leonardo Da Vinci’s android to a French-made artificial duck, learn more about seven early mechanical wonders...
Pre-modern automatons are just so fascinating!
And remember my friend, future events such as these will affect you in the future.
Some early recognition of deep learning on Reddit:
Deep Learning Success: Multisensory Learning & Complex Creative Tasks [February 27th, 2008] (Dead link, no Reddit discussion)
Visual Perception with Deep Learning by Yann LeCun [June 24th, 2008] (No Reddit discussion)
Deep learning of generative models with layered Restricted Boltzmann Machines by Geoffrey Hinton [August 13th, 2008]
Ask compsci: Anyone have experience with/advice for finding nearest neighbors in a high dimensional (around 80-d) space? [November 21st, 2008]
Darpa wants to create a "Deep Learning" computer to identify objects in videos. [April 15th, 2009] (No Reddit discussion)
A bit of discussion about deep learning on /r/CompSci [May 7th, 2009]
What do you consider to be the most exciting/innovative ideas in machine learning right now? [August 3rd, 2009]
February 12th, 2010
dwf responds to a question asked on /r/Science: Dear scientists of Reddit: What do you think is the next big milestone in YOUR field?
Computer scientist, machine learning researcher here.
I think deep learning is in a position now to make leaps and bounds forward at a remarkable pace. We know that the human brain features many examples of deep architectures for learning (the visual cortex being a striking example) and that it works remarkably well. In the past five years some very smart people have finally figured out how to train very deep, very complicated learning systems (mostly variations on neural networks).
While I'm skeptical of Kurzweil and his proclamations of the imminence of the singularity, I do think that it's not long until we have human-level computer vision, for example, with systems that are largely learned almost exclusively from unlabeled data.
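For anyone wondering what "learned from unlabeled data" actually looks like in code, here's a minimal sketch of a single autoencoder layer in plain NumPy, one building block of the "deep architectures" being stacked back then. The data, layer sizes, and learning rate are made up for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Unlabeled data: 500 samples, 20 dimensions (a stand-in for real inputs).
    X = rng.standard_normal((500, 20))

    n_in, n_hidden, lr = 20, 8, 0.1
    W = 0.1 * rng.standard_normal((n_in, n_hidden))  # tied weights: W encodes, W.T decodes
    b_h = np.zeros(n_hidden)
    b_o = np.zeros(n_in)

    for epoch in range(200):
        H = sigmoid(X @ W + b_h)        # encode: the learned "features"
        X_hat = H @ W.T + b_o           # decode: linear reconstruction
        err = X_hat - X                 # gradient of 0.5 * squared reconstruction error
        dA = (err @ W) * H * (1 - H)    # backprop through the sigmoid
        gW = X.T @ dA + err.T @ H       # encoder and decoder contributions to W
        W -= lr * gW / len(X)
        b_h -= lr * dA.sum(axis=0) / len(X)
        b_o -= lr * err.sum(axis=0) / len(X)

    mse = np.mean((sigmoid(X @ W + b_h) @ W.T + b_o - X) ** 2)
    print(f"reconstruction MSE after training: {mse:.3f}")

Nothing in that loop ever sees a label; the hidden layer's only job is to preserve the input, and whatever features help it do that are discovered automatically. Stack a few of those layers and fine-tune, and you have the greedy layer-wise recipe of that era.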
Same person about a year later: February 25th, 2011
PhD student, almost finished my first year.
General Field: Computer Science
Specifics: Machine Learning/Vision, Neural Networks/"Deep Architectures", Scientific Computing
Former work in Machine Learning applied to Computational Biology (during my MSc; specifically, gene expression cancer diagnostics, in silico gene function prediction and high-throughput microscopy analysis).
I work in machine learning, specifically the neurally inspired flavour which is undergoing something of a renaissance under the moniker of "deep learning". I'm interested in learning systems that automatically discover the "features" in the data they're provided with, rather than hand engineered features that account for most of machine learning's success stories. Better yet, when such features can be automatically learned at multiple levels of abstraction in such a manner that they disentangle the natural factors of variation in the data (in a way that Principal Components Analysis or Independent Components Analysis might do in a very simplistic, toy setting). I'm currently applying these methods to data compression, but am also interested in models of the visual system and other higher cognitive processes.
I'm also heavily involved in the numerical/scientific Python community, helping develop the hugely successful open source scientific computing tool stack built on the Python programming language.
Also fairly knowledgeable about a wide range of mathematics/statistics stuff.
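The PCA "toy setting" he mentions is easy to demo. A minimal scikit-learn sketch on synthetic data (invented for illustration): two hidden factors of variation generate 50-dimensional observations, and PCA recovers a 2-dimensional code without ever seeing a label.

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(42)

    # Two hidden "factors of variation" generate 50-dimensional observations.
    factors = rng.standard_normal((1000, 2))    # the true underlying causes
    patterns = rng.standard_normal((2, 50))     # how each cause appears in the data
    X = factors @ patterns + 0.1 * rng.standard_normal((1000, 50))

    pca = PCA(n_components=2)
    codes = pca.fit_transform(X)                # 2-d "features", learned without labels

    # Two components explain nearly all the variance: PCA has recovered
    # a (linear, rotated) version of the factors that generated the data.
    print(pca.explained_variance_ratio_.sum())  # close to 1.0

Deep learning's bet, per the quote above, was that the same kind of disentangling could be done nonlinearly and at multiple levels of abstraction.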
And remember my friend, future events such as these will affect you in the future.
"Whatever happened to voice recognition?" -- article from 2010
Using a mouse or keyboard to control a computer? Don't be silly. In the future, clearly there's only one way computers will be controlled: by speaking to them.
There's only one teeny-tiny problem with this magical future world of computers we control with our voices.
It doesn't work.
Despite ridiculous, order of magnitude increases in computing power over the last decade, we can't figure out how to get speech recognition accuracy above 80% -- when the baseline human voice transcription accuracy rate is anywhere from 96% to 98%!
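For context, that "accuracy" figure is word accuracy, i.e. 1 minus the word error rate (WER): the word-level edit distance between the system's transcript and a human reference, divided by the reference length. A minimal sketch of the standard computation (my own implementation, with a made-up example sentence):

    def word_error_rate(reference: str, hypothesis: str) -> float:
        """WER = (substitutions + deletions + insertions) / reference length,
        via Levenshtein distance over words."""
        ref, hyp = reference.split(), hypothesis.split()
        # dp[i][j] = edit distance between ref[:i] and hyp[:j]
        dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            dp[i][0] = i
        for j in range(len(hyp) + 1):
            dp[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                               dp[i][j - 1] + 1,         # insertion
                               dp[i - 1][j - 1] + cost)  # substitution / match
        return dp[len(ref)][len(hyp)] / len(ref)

    ref = "the quick brown fox jumps over the lazy dog"
    hyp = "the quick brown fox jumped over a lazy dog"
    print(f"WER: {word_error_rate(ref, hyp):.0%}")  # 2 errors / 9 words ~ 22%
    # "80% accuracy" in the article corresponds to a WER around 20%.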
Speech recognition still isn't perfect:
* Smartphone mics are low-quality, so they don't get very good input, and therefore make occasional errors. Google Home-like speakers + mics are better.
* Noise can affect recognition accuracy (and that 80% figure mentioned above is probably in quiet settings; current accuracy levels are above 95%, even in light noise).
* Accents can throw it off.
* If you use a third-party speech rec app (like one by Samsung built-in to their phones), it won't be very good -- Google's is much better.
* Certain proper nouns and numbers can still throw it off (as they do humans).
Even so, if people from 2010 could skip ahead and see how much better speech rec is in 2019 compared to then, they would declare the problem "solved". Then, after a week or two, they would start to find fault with it -- expectations always reset.
This is how it will be with virtual assistants. They will get much, much better in the next 10 years; but we might not notice, at first, as it will happen gradually. But if we could skip ahead and see those 10 years of improvement all at once, we would be floored by what they are capable of. It would just totally boggle the mind. A week later, we'd say, "Ahh... it can't reason as well as a top scientist about science. There's still a long ways to go."
Charlie Stross expressed this idea in his book Accelerando:
https://www.popsci.c...-about-go-blind
Back on board the Field Circus, Donna the Journalist asks the crew members when they think the Singularity took place. "Four years ago," Pierre suggests. Su Ang votes for 2016. But Boris, the jellyfish drinker, says the entire notion of a Singularity is silly. To him, there's no such thing. Wait a minute, Su Ang responds. Here we are, traveling in a spaceship the size of a soda can. We've left our bodies behind to conserve space and energy so that the laser-sail-powered Field Circus can cruise faster. Our brains have been uploaded and are now running electronically within the tiny spaceship's nanocomputers. The pub is "here," along with other virtual environments, so that we don't go into shock from sensory deprivation. "And you can tell me that the idea of a fundamental change in the human condition is nonsense?"
....
I should point out, though, that Atwood is right that we still don't use voice for important work like writing papers or coding. But we do use it more and more to interact with virtual assistants, and transcription is probably increasingly done with speech recognition rather than by humans. These are not "niche uses".