
Re: Supercomputing News and Discussions

Posted: Sat Dec 25, 2021 3:54 am
by Yuli Ban
In 2018, a new supercomputer called Summit was installed at Oak Ridge National Laboratory, in Tennessee. Its theoretical peak capacity was nearly 200 petaflops - that’s 200 thousand trillion floating-point operations per second. At the time, it was the most powerful supercomputer in the world, beating out the previous record holder, China’s Sunway TaihuLight, by a comfortable margin, according to the well-known Top500 ranking of supercomputers. (Summit is currently No. 2, a Japanese supercomputer called Fugaku having since overtaken it.)

In just four short years, though, demand for supercomputing services at Oak Ridge has outstripped even this colossal machine. “Summit is four to five times oversubscribed,” says Justin Whitt, who directs ORNL’s Leadership Computing Facility. “That limits the number of research projects that can use it.”

The obvious remedy is to get a faster supercomputer. And that’s exactly what Oak Ridge is doing. The new supercomputer being assembled there is called Frontier. When complete, it will have a peak theoretical capacity in excess of 1.5 exaflops.

Re: Supercomputing News and Discussions

Posted: Sun Dec 26, 2021 7:12 am
by Yuli Ban
France's Jean Zay supercomputer, one of the most powerful computers in the world and part of the Top500, is now the first HPC to have a photonic coprocessor, meaning it transmits and processes information using light. The development represents a first for the industry.

The breakthrough was made during a pilot program that saw LightOn collaborate with GENCI and IDRIS. Igor Carron, LightOn’s CEO and co-founder, said in a press release: “This pilot program integrating a new computing technology within one of the world’s Supercomputers would not have been possible without the particular commitment of visionary agencies such as GENCI and IDRIS/CNRS. Together with the emergence of Quantum Computing, this world premiere strengthens our view that the next step after exascale supercomputing will be about hybrid computing.”

Over the next few months, the technology will be offered to select users of the Jean Zay research community, who will use the device to undertake research on machine learning foundations, differential privacy, satellite imaging analysis, and natural language processing (NLP) tasks. LightOn’s technology has already been used successfully by a community of researchers since 2018.

Supercomputers have come a long way in the past few years. In June 2018, it was announced that the United States Department of Energy had the world's newest and most powerful supercomputer, called Summit.

Summit operated at 200 petaflops at maximum capacity, achieving 200 quadrillion calculations each second. At the time, those numbers outperformed China's Sunway TaihuLight's 93-petaflop capacity, as well as the U.S.'s previous record holder, Titan.

Re: Supercomputing News and Discussions

Posted: Sun Jan 02, 2022 6:50 am
by wjfox
University Loses Valuable Supercomputer Research After Backup Error Wipes 77 Terabytes of Data


Kyoto University, a top research institute in Japan, recently lost a whole bunch of research after its supercomputer system accidentally wiped out a whopping 77 terabytes of data during what was supposed to be a routine backup procedure.

That malfunction, which occurred sometime between Dec. 14 and Dec. 16, erased approximately 34 million files belonging to 14 different research groups that had been using the school’s supercomputing system. The university operates Hewlett Packard Cray computing systems and a DataDirect ExaScaler storage system—the likes of which can be utilized by research teams for various purposes.

It’s unclear specifically what kind of files were deleted or what caused the malfunction, though the school has said that the work of at least four different groups cannot be restored.

BleepingComputer, which originally reported on this incident, helpfully points out that supercomputing research is, uh, not super cheap, either—costing somewhere in the neighborhood of hundreds of dollars per hour to operate.

https://gizmodo.com/university-loses-va ... 1848286983

Re: Supercomputing News and Discussions

Posted: Mon Jan 10, 2022 6:38 pm
by weatheriscool
Nanowire transistor with integrated memory to enable future supercomputers
https://techxplore.com/news/2022-01-nan ... uture.html
by Lund University

For many years, a bottleneck in technological development has been how to get processors and memories to work faster together. Now, researchers at Lund University in Sweden have presented a new solution that integrates a memory cell with a processor, enabling much faster calculations because they happen in the memory circuit itself.

In an article in Nature Electronics, the researchers present a new configuration, in which a memory cell is integrated with a vertical transistor selector, all at the nanoscale. This brings improvements in scalability, speed and energy efficiency compared with current mass storage solutions.

The fundamental issue is that anything requiring large amounts of data to be processed, such as AI and machine learning, demands both speed and capacity. For this to be successful, the memory and processor need to be as close to each other as possible. In addition, the calculations must run in an energy-efficient manner, not least because current technology generates high temperatures under heavy loads.

Re: Supercomputing News and Discussions

Posted: Thu Feb 17, 2022 4:06 pm
by wjfox
[Image: graph of computing performance over time, hot-linked from Wikipedia]

Re: Supercomputing News and Discussions

Posted: Mon Mar 14, 2022 7:19 pm
by Tadasuke
wjfox wrote: Thu Feb 17, 2022 4:06 pm graph
This is wrong, wjfox. A $1000 PC (whether taking inflation into consideration or not) in 2020 can have multiple teraflops, which is 10^12 flops. The RTX 3070, with a $499 MSRP, has 20.31*10^12 flops; the RTX 3070 Max-Q (laptop), with a TDP of 80 watts, has 13.21 teraflops (13.21*10^12). And TOP500 supercomputers are counted in fp64, meaning double precision, while consumer hardware is counted in fp32, meaning single precision. So you need to double supercomputers' processing speed numbers to compare them to PCs. Japan's Fugaku is already around 1 exaflops fp32. America's Aurora will be around 4 exaflops fp32 later this year (Intel). Kurzweil predicted 20-petaflop, 10 TB RAM, $1000 laptops in 2023. That hasn't come true, but there is substantial progress nonetheless.
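
To make the fp64-vs-fp32 comparison concrete, here is a minimal Python sketch. It assumes the rough 2x ratio between double- and single-precision peak throughput described above, plus approximate peak figures (Fugaku at ~537 petaflops fp64 peak, and the GPU numbers quoted above); these are rough peak specs, not benchmark results:

[code]
# Rough fp64 -> fp32 comparison, assuming the ~2x ratio between double-
# and single-precision peak throughput described above. All figures are
# approximate peak numbers, not benchmark results.
supercomputers_fp64_pf = {"Fugaku": 537}                     # petaflops, fp64 peak
gpus_fp32_tf = {"RTX 3070": 20.31, "RTX 3070 Max-Q": 13.21}  # teraflops, fp32 peak

for name, pf64 in supercomputers_fp64_pf.items():
    ef32 = pf64 * 2 / 1000                # assumed 2x fp64 -> fp32, in exaflops
    print(f"{name}: ~{pf64} PF fp64 -> ~{ef32:.2f} EF fp32")

fugaku_tf32 = supercomputers_fp64_pf["Fugaku"] * 2 * 1000    # fp32 teraflops
for name, tf32 in gpus_fp32_tf.items():
    print(f"{name}: {tf32} TF fp32, ~{fugaku_tf32 / tf32:,.0f}x short of Fugaku")
[/code]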

I recommend watching this first:

and then this:
to understand more about what Intel plans for the near future of supercomputers

Re: Supercomputing News and Discussions

Posted: Tue Mar 15, 2022 4:56 pm
by Nero
Tadasuke wrote: Mon Mar 14, 2022 7:19 pm
This is wrong, wjfox. [...] TOP500 supercomputers are counted in fp64, meaning double precision, while consumer hardware is counted in fp32, meaning single precision. So you need to double supercomputers' processing speed numbers to compare them to PCs. [...]
More recently, in 2021/22, we have the 3090 Ti, which is about 40 teraflops of compute, and later this year we will have the 4000 series, which will likely be 60-80 teraflops in classic compute. The graph does need amending, I agree, because it is very likely we will have a petaflop desktop GPU before 2030.
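
Whether that lands before 2030 depends on the doubling cadence you assume. Here is a minimal Python sketch, taking the ~40 TF fp32 figure for 2022 as the starting point; the cadence values are illustrative assumptions, not measurements:

[code]
import math

# Hedged extrapolation: years until a ~1 petaflop (1000 TF) fp32 desktop GPU,
# starting from an assumed ~40 TF in 2022, under assumed doubling cadences.
start_tf, start_year, target_tf = 40.0, 2022, 1000.0
doublings_needed = math.log2(target_tf / start_tf)     # ~4.6 doublings

for cadence_years in (1.5, 2.0, 2.5):                  # assumed doubling times
    year = start_year + doublings_needed * cadence_years
    print(f"doubling every {cadence_years} yr -> ~1 PF around {year:.0f}")
[/code]

Under a 2-year doubling that lands around 2031; hitting it before 2030 needs something closer to a 1.5-year cadence.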

Re: Supercomputing News and Discussions

Posted: Wed Mar 16, 2022 8:17 am
by Tadasuke
Nero wrote: Tue Mar 15, 2022 4:56 pm [...] The graph does need amending, I agree, because it is very likely we will have a petaflop desktop GPU before 2030.
I think that wjfox took data for $1000 PCs that was measured in megaflops and pasted it onto supercomputer fp64 flops data. A $1000 PC in 2000 was over 1 gigaflops if you count the graphics card. Top GPUs in 2000 were 8 gigaflops without overclocking (old GPUs and CPUs could be overclocked by 30%), and they were cheaper than today's. The RTX 4070 will probably be $599 and 80% faster than the RTX 3070. The RTX 3090 Ti and 4090 Ti aren't GPUs for a $1000 PC, just like the 3080 Ti isn't a GPU for a $1000 laptop. A 2020 desktop PC is about 5000x faster than a 2000 PC in terms of flops. The PS5 had about 1000x the real performance of the PS2, and a new smartwatch is faster than the PS2 and has more memory.
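
The "about 5000x faster" figure implies a growth rate that is easy to check; a minimal sketch using the rough numbers above (~1 gigaflops in 2000, so ~5 teraflops in 2020):

[code]
import math

# Implied growth rate from "~5000x in 20 years" for a $1000 desktop PC
# (roughly 1 GF in 2000 -> 5 TF in 2020; rough figures, not measurements).
factor, years = 5000.0, 20
cagr = factor ** (1 / years) - 1                          # ~53% per year
doubling_time = years * math.log(2) / math.log(factor)    # ~1.6 years
print(f"~{cagr:.0%} per year, doubling roughly every {doubling_time:.1f} years")
[/code]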

Re: Supercomputing News and Discussions

Posted: Wed Mar 16, 2022 9:28 am
by Tadasuke
I have a graph showing Linpack performance (floating-point double precision) of the 500th supercomputer from the TOP500 list of fastest supercomputers (teraflops = 10^12 flops). I think this represents real progress better than the #1 does. As you can see, the improvement is substantial - about 32x during a decade, meaning Linpack performance doubles every 2 years (2^5 = 32, so five doublings per decade). If it continues like that, performance in 2031 will be 1024x higher than in 2011 (ten doublings over 20 years); a quick check of that arithmetic follows below the graph.

[Image: graph of Linpack (fp64) performance of the 500th-ranked TOP500 supercomputer over time, in teraflops]
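
A minimal sketch of that arithmetic; the 32x-per-decade rate is read off the graph, and holding it constant through 2031 is of course an assumption:

[code]
import math

# Doubling time implied by ~32x Linpack growth per decade, and the projected
# gain from 2011 to 2031 if the trend holds (an assumption, not a given).
decade_factor = 32
doubling_time_years = 10 / math.log2(decade_factor)    # 10/5 = 2.0 years
gain_2011_to_2031 = decade_factor ** 2                 # two decades -> 1024x
print(f"doubling every {doubling_time_years:.1f} years; "
      f"2011 -> 2031 gain: {gain_2011_to_2031}x")
[/code]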

Re: Supercomputing News and Discussions

Posted: Wed Mar 16, 2022 2:49 pm
by wjfox
Tadasuke wrote: Wed Mar 16, 2022 8:17 am
I think that wjfox took data for $1000 PCs that was measured in megaflops and pasted it onto supercomputer fp64 flops data.
Oh, I didn't create the graph. It's hot-linked from Wikipedia.