A new system to train deep learning models more efficiently and on a larger scale

15 Dec 2022, by https://techxplore.com/
Deep-learning models have proven to be highly valuable tools for making predictions and solving real-world tasks that involve the analysis of data. Despite their advantages, before they are deployed in real software and devices such as cell phones, these models require extensive training in physical data centers, a process that can be both time- and energy-consuming.
 
 
Researchers at Texas A&M University, Rain Neuromorphics and Sandia National Laboratories have recently devised a new system for training deep learning models more efficiently and on a larger scale. This system, introduced in a paper published in Nature Electronics, relies on new training algorithms and memristor crossbar hardware that can carry out multiple operations at once.
 
"Most people associate AI with health monitoring in smart watches, face recognition in smart phones, etc., but most of AI, in terms of energy spent, entails the training of AI models to perform these tasks," Suhas Kumar, the senior author of the study, told TechXplore.
 
"Training happens in warehouse-sized data centers, which is very expensive both economically and in terms of carbon footprint. Only fully trained models are then downloaded onto our low-power devices."
 
Essentially, Kumar and his colleagues set out to devise an approach that could reduce the carbon footprint and financial costs associated with the training of AI models, thus making their large-scale implementation easier and more sustainable. To do this, they had to overcome two key limitations of current AI training practices.
 
The first of these challenges is the use of inefficient hardware based on graphics processing units (GPUs), which are not inherently designed to run and train deep learning models. The second is the use of inefficient, math-heavy software tools, specifically the so-called backpropagation algorithm.
 
"Our objective was to use new hardware and new algorithms," Kumar explained. "We leveraged our previous 15 years of work on memristor-based hardware (a highly parallel alternative to GPUs), and recent advances in brain-like efficient algorithms (a non-backpropagation local learning technique). Though advances in hardware and software existed previously, we codesigned them to work with each other, which enabled very power efficient AI training."
 
Training a deep neural network entails continuously adapting its configuration, a set of parameters known as "weights," so that it can identify patterns in data with increasing accuracy. This process of adaptation requires numerous multiplications, which conventional digital processors struggle to perform efficiently because they must fetch weight-related information from a separate memory unit.
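
At its core, each training step is dominated by matrix-vector multiplications over the network's weights. The sketch below illustrates why (the layer sizes and names are illustrative, not taken from the paper): on a digital processor, every multiply-accumulate must first fetch its weight from a separate memory.

```python
import numpy as np

# Minimal sketch of why training is multiplication-heavy: every forward
# pass through a layer is a matrix-vector product over that layer's weights.
# Layer sizes here are illustrative assumptions, not figures from the paper.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((256, 784))   # weights, stored in a separate memory
W2 = rng.standard_normal((10, 256))

def forward(x):
    # On a digital processor, W1 and W2 are fetched from memory for every
    # multiply-accumulate; in a memristor crossbar, the weights sit at the
    # same place where the multiplication happens.
    h = np.tanh(W1 @ x)                # 256 x 784 multiply-accumulates
    return W2 @ h                      # 10 x 256 multiply-accumulates

x = rng.standard_normal(784)
print(forward(x).shape)  # (10,)
```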
 
"Nearly all training today is performed using the backpropagation algorithm, which employs significant data movement and solving math equations, and is thus suited to digital processors," Suin Yi, lead author of the study, told TechXplore.
 
"As a hardware solution, analog memristor crossbars, which emerged within the last decade, enable embedding the synaptic weight at the same place where the computing occurs, thereby minimizing data movement. However, traditional backpropagation algorithms, which are suited for high-precision digital hardware, are not compatible with memristor crossbars due to their hardware noise, errors and limited precision."
 
As conventional backpropagation algorithms were poorly suited to the system they envisioned, Kumar, Yi and their colleagues developed a new co-optimized learning algorithm that exploits the hardware parallelism of memristor crossbars. This algorithm, inspired by the differences in neuronal activity observed in neuroscience studies, is tolerant to errors and replicates the brain's ability to learn even from sparse, poorly defined and "noisy" information.
 
"Our algorithm-hardware system studies the differences in how the synthetic neurons in a neural network behave differently under two different conditions: one where it is allowed to produce any output in a free fashion, and another where we force the output to be the target pattern we want to identify," Yi explained.
 
"By studying the difference between the system's responses, we can predict the weights needed to make the system arrive at the correct answer without having to force it. In other words, we avoid the complex math equations backpropagation, making the process more noise resilient, and enabling local training, which is how the brain learns new tasks."
 
The brain-inspired, analog-hardware-compatible algorithm developed in this study could thus enable the energy-efficient implementation of AI in edge devices with small batteries, eliminating the need for large cloud servers that consume vast amounts of electrical power. This could ultimately help make the large-scale training of deep learning algorithms more affordable and sustainable.
 
"The algorithm we use to train our neural network combines some of the best aspects of deep learning and neuroscience to create a system that can learn very efficiently and with low-precision devices," Jack Kendall, another author of the paper, told TechXplore.
 
"This has many implications. The first is that, using our approach, AI models that are currently too large to be deployed can be made to fit in cellphones, smartwatches, and other untethered devices. Another is that these networks can now learn on-the-fly, while they're deployed, for instance to account for changing environments, or to keep user data local (avoiding sending it to the cloud for training)."
 
In initial evaluations, Kumar, Yi, Kendall and their colleague Stanley Williams showed that their approach can reduce the power consumption associated with AI training by up to 100,000 times compared to even the best GPUs on the market today. In the future, it could allow AI training to move out of massive data centers and onto users' personal devices, reducing the carbon footprint associated with AI training and promoting the development of artificial neural networks that support or simplify daily human activities.
 
"We next plan to study how these systems scale to much larger networks and more difficult tasks," Kendall added. "We also plan to study a variety of brain-inspired learning algorithms for training deep neural networks and find out which of these have perform better in different networks, and with different hardware resource constraints. We believe this will not only help us understand how to best perform learning in resource constrained environments, but it may also help us understand how biological brains are able to learn with such incredible efficiency."
