Re: Computers & the Internet News and Discussions
Posted: Wed Sep 28, 2022 6:55 am
A community of futurology enthusiasts
https://www.futuretimeline.net/forum/
https://www.futuretimeline.net/forum/viewtopic.php?f=19&t=13
Microcontrollers, miniature computers that can run simple commands, are the basis for billions of connected devices, from internet-of-things (IoT) devices to sensors in automobiles. But cheap, low-power microcontrollers have extremely limited memory and no operating system, making it challenging to train artificial intelligence models on "edge devices" that work independently from central computing resources.
Training a machine-learning model on an intelligent edge device allows it to adapt to new data and make better predictions. For instance, training a model on a smart keyboard could enable the keyboard to continually learn from the user's writing. However, the training process requires so much memory that it is typically done using powerful computers at a data center, before the model is deployed on a device. This is more costly and raises privacy issues since user data must be sent to a central server.
To address this problem, researchers at MIT and the MIT-IBM Watson AI Lab have developed a new technique that enables on-device training using less than a quarter of a megabyte of memory. Other training solutions designed for connected devices can use more than 500 megabytes of memory, greatly exceeding the 256-kilobyte capacity of most microcontrollers (there are 1,024 kilobytes in one megabyte).
The intelligent algorithms and framework the researchers developed reduce the amount of computation required to train a model, which makes the process faster and more memory-efficient. Their technique can be used to train a machine-learning model on a microcontroller in a matter of minutes.
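One general way to shrink training memory, used in several on-device learning systems, is a sparse update: only a small, pre-selected subset of weights is trained, so gradients for everything else never need to be stored. The sketch below is an illustrative toy of that idea only; the model, the row selection, and all numbers are assumptions, not the MIT/IBM training engine itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear model y = W @ x. Full backprop would keep a gradient for
# every weight; a sparse update stores gradients only for a chosen
# subset of rows, shrinking the memory needed during training.
W = rng.normal(size=(8, 4))
trainable_rows = [0, 3]                      # rows allowed to learn (arbitrary choice)

x = np.array([1.0, 0.5, -0.5, 0.25])         # fixed example input
y_true = rng.normal(size=8)                  # fixed example target

def train_step(lr=0.1):
    err = W @ x - y_true                     # per-output error
    for r in trainable_rows:
        W[r] -= lr * err[r] * x              # gradient step for selected rows only
    return float(sum(err[r] ** 2 for r in trainable_rows))

errors = [train_step() for _ in range(50)]
# Error on the trainable outputs shrinks; all other rows stay frozen,
# and no gradient storage is ever allocated for them.
```

Because only two of the eight rows ever receive a gradient, the working memory for training scales with the trainable subset rather than the full model.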
America's fastest internet has become faster. The Department of Energy's (DOE) dedicated science network, ESnet (Energy Science Network), has been upgraded to ESnet6, boasting a staggering bandwidth of 46 Terabits per second (Tbps). Before you get any ideas, hold up. For now, it's strictly for scientists.
Reservoir computing (RC) is an approach for building computer systems inspired by current knowledge of the human brain. Neuromorphic computing architectures based on this approach consist of dynamic physical nodes which, combined, can process spatiotemporal signals.
Researchers at Tsinghua University in China have recently created a new RC system based on memristors, electrical components that regulate the flow of electrical current in a circuit, while also recording the amount of charge that previously flowed through it. This RC system, introduced in a paper published in Nature Electronics, has been found to achieve remarkable results, both in terms of performance and efficiency.
"The basic architecture of our memristor RC system comes from our earlier work published in Nature Communications, where we validated the feasibility of building analog reservoir layer with dynamic memristors," Jianshi Tang, one of the researchers who carried out the study, told TechXplore. "In this new work, we further build the analog readout layer with non-volatile memristors and integrate it with the dynamic memristor array-based parallel reservoir layer to implement a fully analog RC system."
The RC system created by Tang and his colleagues is based on 24 dynamic memristors (DMs), which are connected into a physical reservoir. Its readout layer, on the other hand, consists of a 2,048×4 array of non-volatile memristors (NVMs).
"Each DM in the DM-RC system is a physical system with computing power (called a DM node), which can generate rich reservoir states through a time-multiplexing process," Tang explained. "These reservoir states are then directly fed into the NVM array for multiply-accumulate (MAC) operations in the analog domain, resulting in the final output."
A team of researchers with members from several institutions in Denmark, Sweden and Japan has developed a means for sending 1.84 petabits of data per second via a fiber-optic cable over 7.9 km. Their report is published in Nature Photonics.
As applications used across the internet mature, moving ever larger amounts of data has become a critical issue. In this new effort, the researchers have developed a single chip that is capable of handling nearly two petabits of data per second.
The chip the researchers built and demonstrated is based on the use of photonics rather than electronics. To transfer huge amounts of data quickly, they added technology to their chip that first splits an incoming data stream (from a laser) into 37 individual lines that travel across individual threads in a fiber cable. But prior to sending, the data in each of the 37 streams was split into 223 individual chunks of data, each corresponding to a unique part of the optical spectrum.
This, the researchers noted, allowed for the creation of a frequency comb, by which data was transmitted in different colors through the fiber cable. In addition to transferring huge amounts of data quickly, it also prevents the data streams from interfering with each other. The researchers then put their chip into an optical processing device, which they describe as about the size of a matchbox—they describe the result as a "massively parallel space-and-wavelength multiplexed data transmission" system.
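As a sanity check on the headline figure, dividing the reported aggregate rate evenly across the spatial lines and comb channels described above gives the implied per-line and per-channel rates. The even split is an assumption for illustration; the article does not give per-channel figures.

```python
# Back-of-the-envelope split of the reported 1.84 Pbit/s across the
# 37 spatial lines and 223 comb channels per line.
TOTAL_BPS = 1.84e15        # 1.84 petabits per second
LINES = 37                 # spatial channels (individual fiber threads)
COMB_CHANNELS = 223        # wavelength channels per line

per_line = TOTAL_BPS / LINES             # roughly 49.7 Tbit/s per line
per_channel = per_line / COMB_CHANNELS   # roughly 223 Gbit/s per channel
print(f"{per_line / 1e12:.1f} Tbit/s per line, "
      f"{per_channel / 1e9:.0f} Gbit/s per channel")
```

Put another way, a single line of this system carries more than the entire 46 Tbit/s ESnet6 backbone mentioned earlier in the thread.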
https://www.businessinsider.com/bendabl ... ex-2022-12
A new bendable computer screen lets you take matters into your hands, literally, if you want a curved display.
Gaming hardware company Corsair unveiled its newest computer monitor earlier this month, the Xeneon Flex. It's the latest addition to its line of gaming monitors and is now available for purchase for a whopping $2,000.
The 45-inch OLED screen can be bent into a curve with a radius as tight as 800 millimeters by pulling the handles on each side forward, giving you the option to switch between a flat and curved display. Curved displays can make for a more immersive gaming experience, though some people also prefer them when working.
"All things are numbers," avowed Pythagoras. Today, 25 centuries later, algebra and mathematics are everywhere in our lives, whether we see them or not. The Cambrian-like explosion of artificial intelligence (AI) brought numbers even closer to us all, since technological evolution allows for parallel processing of a vast amounts of operations.
Progressively, operations between scalars (numbers) were parallelized into operations between vectors and, subsequently, matrices. Multiplication between matrices is now the most time- and energy-demanding operation of contemporary AI computational systems. A technique called "tiled matrix multiplication" (TMM) helps speed up computation by decomposing matrix operations into smaller tiles that are computed by the same system in consecutive time slots. But modern electronic AI engines, which rely on transistors, are approaching their intrinsic limits and can hardly compute at clock frequencies higher than ~2 GHz.
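The tiling idea itself is straightforward to sketch in software: the full product is assembled from small tile-by-tile multiplies, each of which could be handed to the same fixed-size engine (photonic or electronic) in a consecutive time slot. The tile size and matrix dimensions below are arbitrary illustrative choices.

```python
import numpy as np

def tiled_matmul(A, B, tile=2):
    """Tiled matrix multiplication: build C = A @ B from small
    tile-by-tile products, one per 'time slot' on a fixed-size engine."""
    n, k = A.shape
    k2, m = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((n, m))
    for i in range(0, n, tile):
        for j in range(0, m, tile):
            for p in range(0, k, tile):
                # one small tile multiply per time slot; partial
                # products accumulate into the output tile
                C[i:i+tile, j:j+tile] += (
                    A[i:i+tile, p:p+tile] @ B[p:p+tile, j:j+tile]
                )
    return C

rng = np.random.default_rng(2)
A = rng.normal(size=(4, 6))
B = rng.normal(size=(6, 4))
C = tiled_matmul(A, B)   # matches A @ B up to floating-point rounding
```

A hardware TMM engine fixes the tile size to whatever its optical or electronic fabric can multiply in one shot, then streams the tiles through at the clock rate; that is the operation the 50 GHz photonic engine accelerates.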
The compelling credentials of light—ultrahigh speeds and significant energy and footprint savings—offer a solution. Recently, a team of photonic researchers from the WinPhos Research group, led by Prof. Nikos Pleros at the Aristotle University of Thessaloniki, harnessed the power of light to develop a compact silicon photonic computer engine capable of computing TMMs at a record-high 50 GHz clock frequency.