Re: Computers & the Internet News and Discussions
Posted: Wed Sep 28, 2022 6:55 am
A community of futurology enthusiasts
https://www.futuretimeline.net/forum/
https://www.futuretimeline.net/forum/viewtopic.php?f=19&t=13
Microcontrollers, miniature computers that can run simple commands, are the basis for billions of connected devices, from internet-of-things (IoT) devices to sensors in automobiles. But cheap, low-power microcontrollers have extremely limited memory and no operating system, making it challenging to train artificial intelligence models on "edge devices" that work independently from central computing resources.
Training a machine-learning model on an intelligent edge device allows it to adapt to new data and make better predictions. For instance, training a model on a smart keyboard could enable the keyboard to continually learn from the user's writing. However, the training process requires so much memory that it is typically done using powerful computers at a data center, before the model is deployed on a device. This is more costly and raises privacy issues since user data must be sent to a central server.
To address this problem, researchers at MIT and the MIT-IBM Watson AI Lab have developed a new technique that enables on-device training using less than a quarter of a megabyte of memory. Other training solutions designed for connected devices can use more than 500 megabytes of memory, greatly exceeding the 256-kilobyte capacity of most microcontrollers (there are 1,024 kilobytes in one megabyte).
The intelligent algorithms and framework the researchers developed reduce the amount of computation required to train a model, which makes the process faster and more memory-efficient. Their technique can be used to train a machine-learning model on a microcontroller in a matter of minutes.
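For a rough sense of the memory gap involved, here is a back-of-the-envelope sketch in Python. The layer sizes, parameter count, 8-bit quantization and the "update only ~5% of the weights" scheme are illustrative assumptions for the sketch, not the MIT team's actual algorithm:

def kib(n_values, bytes_each):
    return n_values * bytes_each / 1024

activations = [32*32*16, 16*16*32, 8*8*64, 128, 10]   # hypothetical layer output sizes
weights = 80_000                                       # hypothetical parameter count

# Conventional training: fp32 weights, gradients and optimizer state, plus
# every intermediate activation kept around for backpropagation.
full = kib(weights, 4) * 3 + sum(kib(a, 4) for a in activations)

# A restricted on-device scheme: 8-bit weights, fp32 gradients/state for only
# ~5% of them, and only the largest 8-bit activation held at any one time.
tiny = kib(weights, 1) + kib(int(weights * 0.05), 4) * 2 + kib(max(activations), 1)

print(f"conventional training ~{full:.0f} KiB")   # roughly 1,050 KiB
print(f"restricted scheme     ~{tiny:.0f} KiB")   # roughly 125 KiB, under 256 KiB

Whatever the exact method, keeping full-precision gradients and every activation for backpropagation is what blows the budget, and that is what any sub-256-kilobyte training scheme has to avoid.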
America's fastest internet has become faster. The Department of Energy's (DOE) dedicated science network, ESnet (Energy Science Network), has been upgraded to ESnet6, boasting a staggering bandwidth of 46 Terabits per second (Tbps). Before you get any ideas, hold up. For now, it's strictly for scientists.
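For a sense of scale, a quick bit of arithmetic (the 1 PB dataset size is just a round number chosen for illustration):

link_tbps = 46                       # ESnet6's reported bandwidth
petabyte_bits = 1e15 * 8             # 1 PB expressed in bits
seconds = petabyte_bits / (link_tbps * 1e12)
print(f"~{seconds:.0f} s to move 1 PB at {link_tbps} Tbps")   # roughly 174 seconds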
Reservoir computing (RC) is an approach for building computer systems inspired by current knowledge of the human brain. Neuromorphic computing architectures based on this approach consist of dynamic physical nodes that, taken together, can process spatiotemporal signals.
Researchers at Tsinghua University in China have recently created a new RC system based on memristors, electrical components that regulate the flow of current in a circuit while also recording how much charge has previously flowed through them. This RC system, introduced in a paper published in Nature Electronics, has been found to achieve remarkable results, both in terms of performance and efficiency.
"The basic architecture of our memristor RC system comes from our earlier work published in Nature Communications, where we validated the feasibility of building analog reservoir layer with dynamic memristors," Jianshi Tang, one of the researchers who carried out the study, told TechXplore. "In this new work, we further build the analog readout layer with non-volatile memristors and integrate it with the dynamic memristor array-based parallel reservoir layer to implement a fully analog RC system."
The RC system created by Tang and his colleagues is based on 24 dynamic memristors (DMs), which are connected into a physical reservoir. Its readout layer, meanwhile, consists of a 2,048 × 4 array of non-volatile memristors (NVMs).
"Each DM in the DM-RC system is a physical system with computing power (called a DM node), which can generate rich reservoir states through a time-multiplexing process," Tang explained. "These reservoir states are then directly fed into the NVM array for multiply-accumulate (MAC) operations in the analog domain, resulting in the final output."
A team of researchers with members from several institutions in Denmark, Sweden and Japan has developed a means for sending 1.84 petabits of data per second via a fiber-optic cable over 7.9 km. Their report is published in Nature Photonics.
As applications used across the internet mature, moving ever larger amounts of data has become a critical issue. In this new effort, the researchers have developed a single chip that is capable of handling nearly two petabits of data per second.
The chip the researchers built and demonstrated is based on photonics rather than electronics. To transfer huge amounts of data quickly, they added technology to their chip that first splits an incoming data stream (from a laser) into 37 individual lines, each traveling across its own thread in a fiber cable. Before transmission, the data in each of the 37 streams was further split into 223 chunks, each corresponding to a unique part of the optical spectrum.
This, the researchers noted, allowed for the creation of a frequency comb, by which data was transmitted in different colors through the fiber cable. Besides moving huge amounts of data quickly, this approach prevents the data streams from interfering with one another. The researchers then put their chip into an optical processing device, which they describe as about the size of a matchbox; the result is what they call a "massively parallel space-and-wavelength multiplexed data transmission" system.
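The headline figure breaks down neatly across those channels; here is a quick sanity check of the numbers quoted above (assuming the rate is spread evenly, which is a simplification):

spatial_lines = 37        # separate fiber lines
wavelengths   = 223       # frequency-comb "colors" per line
total_pbps    = 1.84      # reported aggregate rate, petabits per second

channels = spatial_lines * wavelengths
per_channel_gbps = total_pbps * 1e6 / channels     # 1 Pbps = 1e6 Gbps
print(f"{channels} channels, ~{per_channel_gbps:.0f} Gbps each")
# -> 8251 channels at roughly 223 Gbps per wavelength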
https://www.businessinsider.com/bendabl ... ex-2022-12
A new bendable computer screen lets you take matters into your hands, literally, if you want a curved display.
Gaming hardware company Corsair unveiled its newest computer monitor earlier this month, the Xeneon Flex. It's the latest addition to its line of gaming monitors and is now available for purchase for a whopping $2,000.
The 45-inch OLED screen can be bent into a curve as tight as an 800-millimeter radius by pulling the handles on each side forward, giving you the option to switch between a flat and a curved display. Curved displays can make for a more immersive gaming experience, though some people also prefer them for work.
"All things are numbers," avowed Pythagoras. Today, 25 centuries later, algebra and mathematics are everywhere in our lives, whether we see them or not. The Cambrian-like explosion of artificial intelligence (AI) brought numbers even closer to us all, since technological evolution allows for parallel processing of a vast amounts of operations.
Progressively, operations between scalars (numbers) were parallelized into operations between vectors, and subsequently, matrices. Multiplication between matrices now trends as the most time- and energy-demanding operation of contemporary AI computational systems. A technique called "tiled matrix multiplication" (TMM) helps to speed computation by decomposing matrix operations into smaller tiles to be computed by the same system in consecutive time slots. But modern electronic AI engines, employing transistors, are approaching their intrinsic limits and can hardly compute at clock-frequencies higher than ~2 GHz.
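Tiled matrix multiplication itself is easy to express in code; the photonic engine's contribution is executing the per-tile multiply-accumulates at a far higher clock rate than transistors allow. A plain NumPy sketch of the tiling idea (tile size and matrix sizes are arbitrary):

import numpy as np

def tiled_matmul(A, B, tile=4):
    """Multiply A (m x k) by B (k x n) one tile at a time, accumulating
    partial products - the work a TMM engine schedules into consecutive
    time slots."""
    m, k = A.shape
    k2, n = B.shape
    assert k == k2
    C = np.zeros((m, n))
    for i in range(0, m, tile):
        for j in range(0, n, tile):
            for p in range(0, k, tile):
                C[i:i+tile, j:j+tile] += A[i:i+tile, p:p+tile] @ B[p:p+tile, j:j+tile]
    return C

A, B = np.random.rand(8, 8), np.random.rand(8, 8)
assert np.allclose(tiled_matmul(A, B), A @ B)      # matches the direct product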
The compelling credentials of light—ultrahigh speeds and significant energy and footprint savings—offer a solution. Recently a team of photonic researchers of the WinPhos Research group, led by Prof. Nikos Pleros from the Aristotle University of Thessaloniki, harnessed the power of light to develop a compact silicon photonic computer engine capable of computing TMMs at a record-high 50 GHz clock frequency.
A 3D mesh is a three-dimensional object representation made of different vertices and polygons. These representations can be very useful for numerous technological applications, including computer vision, virtual reality (VR) and augmented reality (AR) systems.
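In practice, such a mesh is usually stored as a list of vertex coordinates plus a list of faces that index into it. A minimal illustration (a single tetrahedron; not Wi-Mesh's actual output format):

# Vertex positions (x, y, z) and triangular faces given as vertex indices.
vertices = [
    (0.0, 0.0, 0.0),
    (1.0, 0.0, 0.0),
    (0.0, 1.0, 0.0),
    (0.0, 0.0, 1.0),
]
faces = [
    (0, 1, 2),
    (0, 1, 3),
    (0, 2, 3),
    (1, 2, 3),
]
print(f"{len(vertices)} vertices, {len(faces)} triangular faces")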
Researchers at Florida State University and Rutgers University have recently developed Wi-Mesh, a system that can create reliable 3D human meshes, representations of humans that can then be used by different computational models and applications. Their system was presented at the Twentieth ACM Conference on Embedded Networked Sensor Systems (ACM SenSys '22), a conference focusing on computer science research.
"Our research group specializes in cutting-edge wi-fi sensing research," Professor Jie Yang at Florida State University, one of the researchers who carried out the study, told Tech Xplore. "In previous work, we have developed systems that use wi-fi devices to sense a range of human activities and objects, including large-scale human body movements, small-scale finger movements, sleep monitoring, and daily objects. Our E-eyes and WiFinger systems were among the first to use wi-fi sensing to classify various types of daily activities and finger gestures, with a focus on predefined activities using a trained model."
The key objective of the recent work by Professor Yang and his colleagues was to assess whether wi-fi devices commonly used for communications could also help to construct 3D human meshes. A 3D human mesh represents the surface of a human body in three dimensions, capturing different people's heights, weights, somatotypes, body proportions and articulation-induced body deformations.
"3D human meshes have numerous applications, including VR/AR content creation, virtual try-on, and exercise monitoring, and are a fundamental building block for various downstream tasks, such as animation, clothed human reconstruction, and rendering," Professor Yang explained.
Concordia researchers have developed a new technique that can help create high-quality, accurate 3D models of large-scale landscapes—essentially, digital replicas of the real world.
While more work is required before the researchers achieve their goal, they recently outlined their new automated method in the journal Scientific Reports. The framework reconstructs the geometry, structure and appearance of an area using highly detailed images taken by aircraft typically flying higher than 30,000 feet.
These large-scale aerial images—usually more than 200 megapixels each—are then processed to produce precise 3D models of cityscapes, landscapes or mixed areas, capturing appearance right down to the colors of individual structures.
One semi-surprising benefit of the global downturn in the PC market is that it's making DRAM much more affordable. A new report states that companies that purchase memory from manufacturers such as Micron and SK Hynix have reduced their orders, lowering demand. That caused the average selling price of DRAM to fall by 20% in Q1 of 2023 due to excess inventory. Prices are also expected to fall another 10 to 15% in the next quarter. So, you know what to do: Put a note in your calendar now that reads "upgrade RAM," with the date of June 30.
News about the DRAM industry's woes is detailed in a new report from industry analyst TrendForce, which monitors the PC industry for -- what else? -- trends. Its latest dispatch says that Micron and SK Hynix have begun to reduce DRAM production in the face of withering demand. Memory buyers such as system integrators have been cutting orders for the last three quarters, so a response was due: since demand is low, the only way to get prices back up is to reduce supply, according to the report. However, these changes take a while to trickle through the global supply chain, so it's unclear when they will be felt at retail, or whether prices will keep falling into Q3.
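For what those percentages mean when compounded, a quick illustrative calculation (the starting price is arbitrary, and the second drop uses the midpoint of the 10-15% forecast):

price = 100.0
price *= 1 - 0.20       # the ~20% decline reported for Q1 2023
price *= 1 - 0.125      # midpoint of the forecast drop next quarter
print(f"~{price:.0f}% of the original price")   # roughly 70%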
Read more here: https://www.iflscience.com/italy-has-j ... gpt-68270
(IFL Science) Italy has taken action to ban ChatGPT over alleged privacy violations. On March 31, the Italian Data Protection Authority said the AI chatbot would be temporarily blocked in the country "with immediate effect" while it investigates the company behind the technology, OpenAI.
In a press release, the agency alleges that OpenAI is violating the European Union’s privacy law, known as GDPR, as it has no legal basis to justify the “massive collection” of personal data that's used to train the AI.
They also argue that OpenAI doesn’t provide enough information about how much data it collects. Furthermore, they cite concerns that ChatGPT doesn't have any age verification so children risk being exposed to content that is “absolutely inappropriate to their age and awareness.”
In case you’ve been living under a rock for the past few months, ChatGPT is an artificial intelligence (AI) chatbot that’s been designed to respond to text in a conversational way. The technology is based on a large language model, a deep learning algorithm that can understand, process, predict, and generate text based on knowledge gained from massive datasets.
Through its deep understanding of language patterns, it’s able to pull off some incredible feats of apparent intelligence. People have already been using the tool to write computer code and some media outlets even use it to write their news articles (with mixed results). It’s so smart it was even capable of passing a US medical licensing exam.
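"Predict and generate text" concretely means choosing the next token over and over. A toy autoregressive loop shows the pattern (the tiny hand-written bigram table below is obviously nothing like GPT's transformer, just the same sampling loop in miniature):

import random

# Toy next-token table: probability of the next word given the current one.
bigram = {
    "the": {"cat": 0.5, "ban": 0.5},
    "cat": {"sat": 1.0},
    "ban": {"was": 1.0},
    "sat": {"<end>": 1.0},
    "was": {"<end>": 1.0},
}

token, text = "the", ["the"]
while token != "<end>":
    words, probs = zip(*bigram[token].items())
    token = random.choices(words, weights=probs)[0]   # sample the next token
    text.append(token)
print(" ".join(text[:-1]))                            # e.g. "the cat sat"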
And this is why AGI will not be developed in Europe.
caltrek wrote: ↑Sat Apr 01, 2023 7:15 pm Italy Has Just Banned ChatGPT
by Tom Hale
March 31, 2023
Now that both Intel and AMD support DDR5 memory on their latest platforms, the technology is coming into its own. As with DDR4 before it, the initial offerings were slow and expensive, but as DDR5 matures we're starting to see higher-speed kits at prices that are beginning to look rational. Another new phenomenon in the memory world is non-binary kits with capacities of 24GB and 48GB. G.Skill is capitalizing on both trends with its latest overclocked Trident Z5 DDR5 memory kits, capable of speeds up to DDR5-8200. That makes them some of the fastest memory available right now, but these capacities and speeds are not supported by all motherboards.
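The "8200" in DDR5-8200 maps directly onto bandwidth; a quick calculation (per 64-bit channel, ignoring real-world efficiency losses):

transfers_per_s    = 8200e6    # DDR5-8200 = 8,200 megatransfers per second
bytes_per_transfer = 8         # 64-bit memory channel
gb_per_s = transfers_per_s * bytes_per_transfer / 1e9
print(f"~{gb_per_s:.1f} GB/s per channel, ~{2 * gb_per_s:.1f} GB/s in dual-channel")
# -> roughly 65.6 and 131.2 GB/s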
There was a time not terribly long ago when having even a terabyte of data in a single hard drive was unthinkable. Then it happened, and data density has kept rocketing upward ever since; now Seagate has released its first 22TB 3.5-inch hard drive. The new Seagate IronWolf Pro 22TB is available for purchase, and it's not as spendy as you'd expect: the drive has been heavily discounted from its $599 MSRP to a mere $399.
Even a cheap solid-state drive (SSD) would be much faster than this spinning hard drive, but you can't beat that capacity or the cost, which works out to about $18 per terabyte. The IronWolf Pro 22TB (model ST22000NT001) is filled with helium, which reduces drag and turbulence around its 10 platters and 20 read-write heads. It uses conventional magnetic recording (CMR) rather than shingled magnetic recording (SMR), giving more predictable write performance. You might recall Western Digital got in hot water a few years ago for quietly switching to SMR in its NAS hard drives.
At 7200 RPM with 512MB of cache, the new IronWolf has a maximum sustained transfer rate of 285 MB/s. The latest SSDs, on the other hand, can read data at more than 7,000 MB/s. Still, the IronWolf Pro 22TB is a little faster (7.5%) than Western Digital's 22TB drive, launched late last year.
Seagate equipped the new 22TB drive with rotational vibration (RV) sensors to keep performance steady in multi-drive enclosures. It idles at 20 dBA and jumps to 26 dBA during active seeking, quieter than Western Digital's drive, which hits 32 dBA for active seek. The Seagate drive can get louder during heavy workloads, reaching as high as 34 dBA.
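Two quick figures from the specs above, for context (straight arithmetic, assuming the drive sustains its maximum rate the whole time):

capacity_tb = 22
price_usd   = 399
rate_mb_s   = 285              # maximum sustained transfer rate

print(f"${price_usd / capacity_tb:.2f} per TB")                               # ~$18.14/TB
hours_to_fill = capacity_tb * 1e6 / rate_mb_s / 3600
print(f"~{hours_to_fill:.0f} hours to fill the drive at {rate_mb_s} MB/s")    # ~21 hours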