
AI & Robotics News and Discussions

Tags: artificial intelligence, machine learning, deep learning, robots, ASIMO, Singularity, Boston Dynamics, automation, AI, robotics

3378 replies to this topic

#3341
Sciencerocks (Member, 13,326 posts)
Researchers achieve 100 percent recognition rates for half and three-quarter faces

by University of Bradford

Facial recognition technology works even when only half a face is visible, researchers from the University of Bradford have found.

 

Using artificial intelligence techniques, the team achieved 100 per cent recognition rates for both three-quarter and half faces. The study, published in Future Generation Computer Systems, is the first to use machine learning to test the recognition rates for different parts of the face.

Lead researcher Professor Hassan Ugail, from the University of Bradford, said: "The ability humans have to recognise faces is amazing, but research has shown it starts to falter when we can only see parts of a face. Computers can already perform better than humans in recognising one face from a large number, so we wanted to see if they would be better at partial facial recognition as well."

The team used a machine learning technique known as a 'convolutional neural network', drawing on a feature extraction model called VGG—one of the most popular and widely used for facial recognition.
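For anyone curious how such a pipeline typically fits together, here is a minimal sketch: a pretrained VGG network as feature extractor, plus nearest-neighbour matching by cosine similarity. This is not the Bradford team's code; the ImageNet-trained VGG16 below is a stand-in for the face-trained VGG model they used, and the preprocessing and similarity metric are assumptions for illustration.

    # Minimal sketch of CNN-based face matching (illustrative, not the paper's code).
    # An ImageNet-pretrained VGG16 stands in for the face-trained VGG the study used.
    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from PIL import Image

    vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()
    # Drop the final classification layer, keeping the 4096-d feature output.
    extractor = torch.nn.Sequential(
        vgg.features, vgg.avgpool, torch.nn.Flatten(),
        *list(vgg.classifier.children())[:-1],
    )

    preprocess = T.Compose([
        T.Resize((224, 224)), T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    def embed(path):
        """Map a face image (full, half, or three-quarter crop) to a feature vector."""
        x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        with torch.no_grad():
            return extractor(x).squeeze(0)

    def identify(probe_path, gallery):
        """Return the enrolled identity whose embedding best matches the probe."""
        probe = embed(probe_path)
        cos = torch.nn.functional.cosine_similarity
        return max(gallery, key=lambda name: cos(probe, gallery[name], dim=0).item())

Recognition then reduces to comparing a probe embedding against a gallery of enrolled embeddings; the interesting result in the paper is that this comparison still succeeds when the probe is only half a face.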

 

https://techxplore.c...ee-quarter.html



#3342
Yuli Ban (Born Again Singularitarian, 20,582 posts)

AI develops human-like number sense – taking us a step closer to building machines with general intelligence

One really exciting thing about this research is that it shows that our current principles of learning are quite fundamental. Some of the most high-level aspects of thinking that people and animals demonstrate are related deeply to the structure of the world, and our visual experience of that.
It also hints that we might be on the right track to achieve a more comprehensive, human-level artificial intelligence. Applying this kind of learning to other tasks – perhaps applying it to signals that occur over a period of time rather than over pixels in an image – could yield machines with even more human-like qualities. Things we once thought fundamental to being human – musical rhythm for example, or even a sense of causality – are now being examined from this new perspective.
As we continue to discover more about building artificial learning techniques, and find new ways to understand the brains of living organisms, we unlock more of the mysteries of intelligent, adaptive behaviour.




#3343
wjfox (Administrator, 10,441 posts)
This AI-generated Joe Rogan fake has to be heard to be believed
 
The most realistic AI voice clone we’ve heard
 
By James Vincent  May 17, 2019, 7:28am EDT
 
 
 

#3344
Sciencerocks (Member, 13,326 posts)
Researchers try to recreate human-like thinking in machines

by Ingrid Fadelli , Tech Xplore

Researchers at Oxford University have recently tried to recreate human thinking patterns in machines, using a language guided imagination (LGI) network. Their method, outlined in a paper pre-published on arXiv, could inform the development of artificial intelligence that is capable of human-like thinking, which entails a goal-directed flow of mental ideas guided by language.

 

Human thinking generally requires the brain to understand a particular language expression and use it to organize the flow of ideas in the mind. For instance, if a person leaving her house realizes that it's raining, she might internally say, "If I get an umbrella, I might avoid getting wet," and then decide to pick up an umbrella on the way out. As this thought goes through her mind, however, she will automatically know what the visual input (i.e. raindrops) she observes means, and how holding an umbrella could prevent getting wet, perhaps even imagining the feeling of holding the umbrella or getting wet under the rain.

 

https://techxplore.c...e-machines.html



#3345
wjfox (Administrator, 10,441 posts)

DeepMind Can Now Beat Us at Multiplayer Games, Too

May 30, 2019

Capture the flag is a game played by children across the open spaces of a summer camp, and by professional video gamers as part of popular titles like Quake III and Overwatch.

In both cases, it’s a team sport. Each side guards a flag while also scheming to grab the other side’s flag and bring it back to home base. Winning the game requires good old-fashioned teamwork, a coordinated balance between defense and attack.


In other words, capture the flag requires what would seem to be a very human set of skills. But researchers at an artificial intelligence lab in London have shown that machines can master this game, too, at least in the virtual world.

In a paper published on Thursday in Science (and previously available on the website arXiv before peer review), the researchers reported that they had designed automated “agents” that exhibited humanlike behavior when playing the capture the flag “game mode” inside Quake III. These agents were able to team up against human players or play alongside them, tailoring their behavior accordingly.

Read more: https://www.nytimes....telligence.html

 

 


#3346
Yuli Ban (Born Again Singularitarian, 20,582 posts)

Less Like Us: An Alternate Theory of Artificial General Intelligence

The question of whether an artificial general intelligence will be developed in the future—and, if so, when it might arrive—is controversial. One (very uncertain) estimate suggests 2070 might be the earliest we could expect to see such technology.
Some futurists point to Moore’s Law and the increasing capacity of machine learning algorithms to suggest that a more general breakthrough is just around the corner. Others suggest that extrapolating exponential improvements in hardware is unwise, and that creating narrow algorithms that can beat humans at specialized tasks brings us no closer to a “general intelligence.”
But evolution has produced minds like the human mind at least once. Surely we could create artificial intelligence simply by copying nature, either by guided evolution of simple algorithms or wholesale emulation of the human brain.
Both of these ideas are far easier to conceive of than they are to achieve. Even emulating the 302 neurons of the nematode worm's brain is still an extremely difficult engineering challenge, let alone the 86 billion neurons of a human brain.
Leaving aside these caveats, though, many people are worried about artificial general intelligence. Nick Bostrom’s influential book on superintelligence imagines it will be an agent—an intelligence with a specific goal. Once such an agent reaches a human level of intelligence, it will improve itself—increasingly rapidly as it gets smarter—in pursuit of whatever goal it has, and this “recursive self-improvement” will lead it to become superintelligent.
This “intelligence explosion” could catch humans off guard. If the initial goal is poorly specified or malicious, or if proper safety features are not in place, or if the AI decides it would prefer to do something else instead, humans may be unable to control our own creation. Bostrom gives examples of how a seemingly innocuous goal, such as “Make everyone happy,” could be misinterpreted; perhaps the AI decides to drug humanity into a happy stupor, or to convert most of the world into computing infrastructure to pursue its goal.
Drexler and Comprehensive AI Services
These are increasingly familiar concerns for an AI that behaves like an agent, seeking to achieve its goal. There are dissenters to this picture of how artificial general intelligence might arise. One notable alternative point of view comes from Eric Drexler, famous for his work on molecular nanotechnology and Engines of Creation, the book that popularized it.
With respect to AI, Drexler believes our view of an artificial intelligence as a single “agent” that acts to maximize a specific goal is too narrow, almost anthropomorphizing AI. Instead, he proposes “Comprehensive AI Services” (CAIS) as an alternative, more realistic route towards artificial general intelligence.




#3347
wjfox (Administrator, 10,441 posts)
World's first raspberry-picking robot to debut in 2020
 
3rd June 2019
 
A new robotic fruit-picking system will be capable of outpacing humans when a commercial version launches next year.
 
 
 
[Image: raspberry-picking robot]

#3348
wjfox (Administrator, 10,441 posts)
So... AI will kill us. Just in a different way than we expected! 
 
---
 
 
Training a single AI model can emit as much carbon as five cars in their lifetimes
 
Deep learning has a terrible carbon footprint.
 
by Karen Hao
Jun 6, 2019
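A quick back-of-the-envelope check of the headline figure, using the numbers reported from the Strubell et al. study that the article covers (the comparison script itself is just an illustration):

    # Figures reported in the study (lbs of CO2-equivalent):
    nas_training = 626_155   # Transformer trained with neural architecture search
    car_lifetime = 126_000   # average US car over its lifetime, fuel included
    print(nas_training / car_lifetime)  # ~4.97, i.e. roughly five car-lifetimes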
 
 
 

#3349
Raklian (An Immortal In The Making, 7,010 posts)

 

wjfox said: "So... AI will kill us. Just in a different way than we expected!"
I'm sure ultimately A.I. will be powered by renewable power and nuclear energy in its various forms.



#3350
starspawn0 (Member, 1,073 posts)
Microsoft Research has made a big advance on the GLUE benchmark, reaching human-level performance:

https://gluebenchmark.com/leaderboard

This is a set of NLP (Natural Language Processing) benchmarks. One of them is WNLI, which is a type of Winograd-schema challenge dataset. It's not exactly the same as the one used in the official Winograd Schema Challenge (with its 273 sentences), but it is very close. A couple of months ago, the best any system could do on these types of problems was a little more than 70% accuracy. Microsoft Research has now achieved 89% accuracy! -- that is a quantum leap in performance on this type of "commonsense reasoning" challenge.

This doesn't necessarily mean that Microsoft's system is actually doing commonsense reasoning -- but whatever it is doing, it's working well enough to fake commonsense reasoning for this specific type of test.

....

See this Tweet thread:

https://twitter.com/...780477191380994

Sam Bowman writes:
 

There's no new paper out, so I don't know much about what's changed. The WNLI results are quite surprising.


Miles Brundage asks:
 

Very curious about that part! Am I interpreting the task right in seeing this as very correlated with other (partial/full) WSC evaluations? /what is the rough gap btwn them (i.e. this like a dozens of points jump or a few)?


Bowman replies:
 

Not totally sure what you mean. This is a huge jump on our private WNLI data, which is very similar to WSC273 in quality/source/difficulty, but slightly different in format. I don't recall seeing numbers on WSC273 anywhere that would suggest this level of progress...


Once again: just a few months ago, the best any team could do was 65% to 70% accuracy -- so there has been a leap from 65-70% --> 89%.

#3351
Alislaws (Democratic Socialist Materialist, 1,975 posts)

Sorry, just to check I understand this clearly:

  • WNLI is a test of an AI's ability to understand text and reason about it using "common sense" logic (see below)
  • Human success on this test is 95%
  • Previously machine learning had been stuck below 65%, but in the last few months this has suddenly shot up to 89%, although Microsoft (and Alibaba, who have hit 80.8%) have not published papers, so we don't yet know how they achieved these scores

A Winograd schema is a pair of sentences that differ in only one or two words and that contain an ambiguity that is resolved in opposite ways in the two sentences, requiring the use of world knowledge and reasoning for its resolution. The schema takes its name from a well-known example by Terry Winograd:

 

The city councilmen refused the demonstrators a permit because they [feared/advocated] violence.

If the word is "feared", then "they" presumably refers to the city council; if it is "advocated", then "they" presumably refers to the demonstrators.

Have I understood this right?
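For concreteness, here is roughly how that schema pair gets recast in WNLI's sentence-pair entailment format; the phrasing and labels below are my own illustration rather than actual dataset entries:

    # Each WNLI item pairs a premise with a candidate reading of the pronoun;
    # the model must decide whether the premise entails the hypothesis.
    wnli_style_examples = [
        {
            "premise": "The city councilmen refused the demonstrators a permit "
                       "because they feared violence.",
            "hypothesis": "The city councilmen feared violence.",
            "label": 1,  # entailment: "they" = the councilmen
        },
        {
            "premise": "The city councilmen refused the demonstrators a permit "
                       "because they advocated violence.",
            "hypothesis": "The city councilmen advocated violence.",
            "label": 0,  # not entailment: "they" = the demonstrators
        },
    ]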



#3352
Yuli Ban (Born Again Singularitarian, 20,582 posts)

 

wjfox said: "So... AI will kill us. Just in a different way than we expected!" (quoting the "Training a single AI model can emit as much carbon as five cars in their lifetimes" post above)

 

Perhaps the Singularity and becoming a Type 1 civilization are directly correlated, because you need to produce 174 petawatts of power to run superhuman-level AI.
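(For reference, 174 petawatts is the total solar power intercepted by Earth, the usual yardstick for a Kardashev Type 1 civilization; a quick check of the arithmetic:)

    import math

    solar_constant = 1361    # W/m^2 arriving at Earth's distance from the Sun
    earth_radius = 6.371e6   # m
    # Earth presents a disc of area pi * R^2 to incoming sunlight.
    intercepted = solar_constant * math.pi * earth_radius**2
    print(intercepted)       # ~1.74e17 W, i.e. ~174 petawatts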




#3353
starspawn0 (Member, 1,073 posts)
Amazon Announces Alexa Conversations, Experiences Across Skills

https://variety.com/...ons-1203234290/

Alexa head scientist Rohit Prasad demonstrated the assistant’s new conversational capabilities, dubbed Alexa Conversations, with a video on stage. The video showed a consumer asking for movies that were playing nearby, settling on a film, finding showtimes that matched her schedule, buying the tickets, finding a restaurant nearby, and reserving a table, all without ever having to break the flow of her conversation with Alexa.

“We imagine a future where you would be able to naturally converse with Alexa,” said Prasad. “This is a big leap for conversational AI.”



#3354
starspawn0 (Member, 1,073 posts)
US billionaire gives Oxford University £150m

https://www.bbc.com/...cation-48681893

Mr Schwarzman told the BBC he was giving the money to Oxford because artificial intelligence was the major issue of our age.

"At the moment, most governments are utterly unprepared to deal with this, and why would they be, it's a different type of technology," he said.



#3355
Yuli Ban (Born Again Singularitarian, 20,582 posts)

Cobalt Robotics raises $35 million for security robots that patrol offices, warehouses, and parking lots

Cobalt Robotics, a three-year-old startup building indoor security robots designed to work alongside human guards, today revealed that it’s secured $35 million in series B financing led by global investment firm Coatue. The capital infusion builds on a $13 million series A round in March 2018, bringing the San Mateo, California-based company’s total raised to nearly $50 million.
 
Cobalt’s five-foot cone-shaped, touchscreen- and LED-touting robots patrol workspaces along designated routes, leveraging AI and machine learning to spot anomalies like open doors, environmental risks, and intruders and call for human assistance when necessary. They’re capable of traversing both brightly and dimly lit environments thanks to over 60 sensors (including a 360-degree RGB sensor, infrared sensors, a thermal camera, depth and ultrasonic sensors, lidar, and environmental sensors), and they integrate with existing security platforms and a cloud-based performance monitoring dashboard.



#3356
caltrek (Member, 9,569 posts)

This Robot Fish Has "Blood" That Doubles as Muscle

 

http://blogs.discove...as-its-muscles/

 

Extract:

 

(Discover) When it comes to designing better gizmos, efficiency is the name of the game. Why have two separate components to do two separate tasks, if you can have one do both?

 

…A team of engineers at Cornell and the University of Pennsylvania have created a soft fish-shaped robot whose blood does double duty as both muscles and battery. It can operate autonomously for over a day and a half, and even swims upstream at a respectable speed. The invention’s success is a boon not just for aquatic bot enthusiasts, but also for engineers looking for ways to improve efficiency in future robot designs — and anyone who’d benefit from this new and improved kind of liquid battery.

 

…To show this doubling up of battery and hydraulics could work, the team designed and built a mechanical, autonomous robot fish — specifically, a lionfish. (“This concept can be generalized to other machines and robots,” they make clear.) Its flowing liquids simultaneously provide power to the fish’s pumps and electronics, while transmitting mechanical work to push around the fins, which lets it swim. Usually such liquid batteries use hard, rigid materials, but the authors chose a soft silicone body for the fish robot, which made it more flexible and better suited for its aquatic environment.

 

The finished product can swim unaided for over 36 hours, at more than 1.5 body lengths per minute upstream. And, the authors proudly add, “The robot can also fan its pectoral fins, a behavior that lionfish use to communicate.”

 

[Image: the robot lionfish (Credit: James Pikul)]




#3357
wjfox (Administrator, 10,441 posts)
Endless AI-generated spam risks clogging up Google’s search results
 
Jul 2, 2019
 
Over the past year, AI systems have made huge strides in their ability to generate convincing text, churning out everything from song lyrics to short stories. Experts have warned that these tools could be used to spread political disinformation, but there’s another target that’s equally plausible and potentially more lucrative: gaming Google.
 
Instead of being used to create fake news, AI could churn out infinite blogs, websites, and marketing spam. The content would be cheap to produce and stuffed full of relevant keywords. But like most AI-generated text, it would only have surface meaning, with little correspondence to the real world. It would be the information equivalent of empty calories, but still potentially difficult for a search engine to distinguish from the real thing.
 
Just take a look at this blog post answering the question: “What Photo Filters are Best for Instagram Marketing?” At first glance it seems legitimate, with a bland introduction followed by quotes from various marketing types. But read a little more closely and you realize it references magazines, people, and — crucially — Instagram filters that don’t exist.
 


#3358
Yuli Ban (Born Again Singularitarian, 20,582 posts)

Robots Have a Hard Time Grasping These "Adversarial Objects"

There’s been a bunch of research recently into adversarial images, which are images of things that have been modified to be particularly difficult for computer vision algorithms to accurately identify. The idea is that these kinds of images can be used to help design more robust computer vision algorithms, because their “adversarial” nature is sort of a deliberate worst-case scenario—if your algorithm can handle adversarial images, then it can probably handle most other things.
Researchers at UC Berkeley have been extending this concept to robot grasping, with physical adversarial objects carefully designed to be tricky for conventional robot grippers to pick up. All it takes is a slight tweak to straightforward three-dimensional shapes, and a standard two-finger gripper will have all kinds of trouble finding a solid grasp.
The key to these adversarial objects is that they look easy to grasp, but at least for a two-finger (parallel-jaw) gripper, they’re not. The difference between what the objects look like and what their actual geometries are is subtle: In one of the examples, you can see a cube with some shallow pyramids on three of the six sides—the smallest pyramid has a slope of just 10 degrees. The side opposite each pyramid is a regular, flat face, and the result is that there are no directly opposing faces on the cube. This causes problems for two-finger grippers, which work by pinching things, and if you’re trying to pinch against an angled surface, the force you exert will tend to cause the object to twist, often leading to a failed grasp.
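The twisting failure follows from a simple friction-cone argument: a parallel-jaw pinch on a face tilted by angle θ only holds if tan(θ) ≤ μ, where μ is the friction coefficient. A toy check of that textbook simplification (my own, not the Berkeley group's analysis):

    import math

    def pinch_is_stable(theta_deg, mu):
        """Crude friction-cone test for a parallel-jaw pinch on a tilted face."""
        return math.tan(math.radians(theta_deg)) <= mu

    # Even the shallowest 10-degree slope defeats a low-friction gripper:
    print(pinch_is_stable(10, mu=0.1))  # False -> the object twists out of the grasp
    print(pinch_is_stable(10, mu=0.3))  # True  -> enough friction to hold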




#3359
rajamanickam (New Member, 4 posts)

A machine learning algorithm was able to make completely new scientific discoveries just by going through old scientific papers. No human can read all the research papers, but machine learning doesn't have that limitation, so an AI can make new discoveries simply by spotting connections between papers that humans miss.

Researchers at the U.S. Department of Energy's Berkeley Lab have shown that an algorithm with no training in materials science can scan the text of millions of papers and uncover new scientific knowledge.
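The technique behind this is reportedly word embeddings trained on paper abstracts (the "mat2vec" work). A minimal gensim sketch of the idea, with a toy corpus standing in for the millions of real abstracts:

    from gensim.models import Word2Vec

    # Toy stand-in corpus: one tokenized abstract per item
    # (the real work trained on millions of materials-science abstracts).
    abstracts = [
        ["the", "thermoelectric", "performance", "of", "bi2te3", "was", "measured"],
        ["li7la3zr2o12", "shows", "promise", "as", "a", "solid", "electrolyte"],
        # ... millions more in the real setting
    ]

    model = Word2Vec(abstracts, vector_size=200, window=8, min_count=1, sg=1)

    # Materials whose vectors sit near a property word like "thermoelectric"
    # become candidate discoveries, even if no single paper states the link.
    print(model.wv.most_similar("thermoelectric", topn=5))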

 

Watch this video for more details.



#3360
starspawn0 (Member, 1,073 posts)
ALIBABA AI BEATS HUMANS IN READING-COMPREHENSION TEST

https://www.alizila....ion-test-again/

Pretty impressive stuff. This means virtual assistants will get better and better at answering your questions. Microsoft could probably already make Bing work really, really well, given their in-house talent; the fact that they don't likely means they are juggling many competing interests, including optimizing revenue from advertisers while minimizing the amount of money they pay for compute for their "free service".




