

IEEE Journal of Solid-State Circuits, ISSN 0018-9200, 04/2018, Volume 53, Issue 4, pp. 983 - 994
A versatile reconfigurable accelerator architecture for binary/ternary deep neural networks is presented. In-memory neural network processing without any... 
near-memory processing | Memory management | Neurons | Random access memory | neural networks | Parallel processing | in-memory processing | Binary neural networks | reconfigurable array | System-on-chip | Biological neural networks | ternary neural networks | ENGINEERING, ELECTRICAL & ELECTRONIC | Power consumption
Journal Article
IEEE Journal of Solid-State Circuits, ISSN 0018-9200, 01/2019, Volume 54, Issue 1, pp. 186 - 196
Journal Article
IEEE Transactions on Circuits and Systems II: Express Briefs, ISSN 1549-7747, 04/2017, Volume 64, Issue 4, pp. 462 - 466
Remarkable hardware robustness of deep learning (DL) is revealed by error injection analyses performed using a custom hardware model implementing parallelized... 
fault tolerance | static random access memory (SRAM) | Computer architecture | Machine learning | restricted Boltzmann machines (RBMs) | low power | Hardware | Robustness | Data models | Circuit faults | Deep learning (DL) | Field programmable gate arrays | ENGINEERING, ELECTRICAL & ELECTRONIC
Journal Article
2018 IEEE International Solid - State Circuits Conference - (ISSCC), ISSN 0193-6530, 02/2018, Volume 61, pp. 216 - 218
A key consideration for deep neural network (DNN) inference accelerators is the need for large and high-bandwidth external memories. Although an architectural... 
Three-dimensional displays | System-on-chip | Stacking | Memory management | Random access memory | Engines
Conference Proceeding
Circuits and Systems, ISSN 2153-1285, 2016, Volume 7, Issue 9, pp. 2132 - 2141
Journal Article
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), ISSN 0302-9743, 2017, Volume 10630, pp. 137 - 142
Conference Proceeding
2017 IEEE 60th International Midwest Symposium on Circuits and Systems (MWSCAS), ISSN 1548-3746, 08/2017, Volume 2017-, pp. 116 - 119
The expanding use of deep learning algorithms is driving demand for accelerated neural network (NN) signal processing. For the NN processing, in-memory... 
Neurons | Artificial neural networks | Computer architecture | Signal processing | System-on-chip | Biological neural networks | Engines
Conference Proceeding
Nonlinear Theory and its Applications, IEICE, ISSN 2185-4106, 01/2019, Volume 9, Issue 4, p. 453
We propose “QER”, a novel regularization strategy for hardware-aware neural network training. Although quantized neural networks reduce computation power and... 
Measurement | Training | Accuracy | Power consumption | Neural networks | Hardware | Regularization
Journal Article
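The QER entry above concerns regularization for hardware-aware training of quantized networks. As a generic sketch only (the paper's exact QER formulation is not reproduced here; the level set `levels` and weight `lam` are hypothetical), one common quantization-error penalty pulls each weight toward its nearest representable value during training:

```python
# Hedged sketch of a generic quantization-error regularizer, not the
# paper's actual QER method.
def nearest_level(w, levels):
    """Return the quantization level closest to weight w."""
    return min(levels, key=lambda q: abs(w - q))

def quant_error_penalty(weights, levels, lam=0.01):
    """Penalty: lam times the sum of squared distances from each
    weight to its nearest quantization level."""
    return lam * sum((w - nearest_level(w, levels)) ** 2 for w in weights)
```

Added to the task loss, such a term keeps the full-precision weights close to the quantization grid, so accuracy degrades less when the weights are later rounded for hardware inference.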
Nonlinear Theory and its Applications, IEICE, ISSN 2185-4106, 01/2017, Volume 7, Issue 3, p. 395
Remarkable hardware robustness of deep learning is revealed from an error-injection analysis performed using a custom hardware model implementing parallelized... 
Hardware | Robustness | Error analysis | Safety critical | Belief networks | Machine learning
Journal Article
2017 Symposium on VLSI Circuits, 06/2017, pp. C24 - C25
A versatile reconfigurable accelerator for binary/ternary deep neural networks (DNNs) is presented. It features a massively parallel in-memory processing... 
Neurons | Random access memory | Computer architecture | Artificial neural networks | Very large scale integration | Biological neural networks | Field programmable gate arrays
Conference Proceeding
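Several entries above describe accelerators for binary/ternary deep neural networks. As an illustrative sketch (not any of the listed papers' actual schemes; the threshold `t` is an invented parameter), constraining weights to {-1, 0, +1} reduces a dot product to additions and subtractions, which is what makes massively parallel in-memory implementations attractive:

```python
# Illustrative only: generic ternary quantization, not a specific
# accelerator's design. Threshold t is a hypothetical parameter.
def ternarize(weights, t=0.5):
    """Map each weight to -1, 0, or +1 by thresholding its magnitude."""
    return [0 if abs(w) < t else (1 if w > 0 else -1) for w in weights]

def ternary_dot(tern_weights, activations):
    """Multiplier-free dot product: only additions and subtractions."""
    acc = 0
    for w, x in zip(tern_weights, activations):
        if w == 1:
            acc += x
        elif w == -1:
            acc -= x
    return acc
```

Because no multipliers are needed, each weight costs at most one add/subtract per activation, and a zero weight costs nothing.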
2017 IEEE International Symposium on Circuits and Systems (ISCAS), 05/2017, pp. 1 - 1
Real-time results obtained from an unsupervised feature extraction system using Restricted Boltzmann Machines (RBMs) implemented on FPGA are presented. The... 
Portable computers | Random access memory | Machine learning | Computer architecture | Feature extraction | Real-time systems | Field programmable gate arrays
Conference Proceeding
2016 IEEE International Symposium on Circuits and Systems (ISCAS), ISSN 0271-4310, 05/2016, Volume 2016-, pp. 357 - 360
A key aspect of constructing highly scalable Deep-learning microelectronic systems is to implement fault tolerance in the learning sequence. Error-injection... 
Analytical models | Fault tolerance | Error analysis | Computer architecture | restricted Boltzmann machines (RBMs) | Hardware | Data models | Field programmable gate arrays | Deep Learning | Learning | Errors | Circuits | Architecture (computers) | Robustness | Tolerances | Belief networks
Conference Proceeding
2016 International Conference on ReConFigurable Computing and FPGAs (ReConFig), 11/2016, pp. 1 - 6
Deep learning is widely used in various applications, and diverse neural networks have been proposed. A form of neural network, such as the novel... 
Temperature measurement | Neurons | Neural networks | Computer architecture | Feature extraction | Time-domain analysis | Field programmable gate arrays
Conference Proceeding
Proceedings - 2017 5th International Symposium on Computing and Networking, CANDAR 2017, 04/2018, Volume 2018-, pp. 291 - 297
Conference Proceeding
2017 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), 12/2017, Volume 2018-, pp. 1045 - 1051
Hardware-oriented approaches to accelerate deep neural network processing are very important for various embedded intelligent applications. This paper is a... 
Parallel processing | Hardware | Kernel | Field programmable gate arrays | Biological neural networks | Computer architecture
Conference Proceeding
2017 International Joint Conference on Neural Networks (IJCNN), 05/2017, pp. 2510 - 2516
The convolutional neural network (CNN) is a state-of-the-art model that achieves very high accuracy in many machine-learning tasks. Recently, for... 
Bandwidth | Digital signal processing | Parallel processing | Kernel | Biological neural networks | Field programmable gate arrays
Conference Proceeding
2018 International Conference on Field-Programmable Technology (FPT), 12/2018, pp. 6 - 13
Energy-constrained neural network processing is in high demand for various mobile applications. The binary neural network aggressively enhances the computational... 
binary neural network | Computational modeling | FPGA | hardware oriented algorithm | approximate neural network | error diffusion | neural network | Biological neural networks | Quantization (signal) | quantized neural network | Signal processing algorithms | Approximation algorithms | Hardware | dithering
Conference Proceeding
2018 IEEE 12th International Symposium on Embedded Multicore/Many-core Systems-on-Chip (MCSoC), 09/2018, pp. 237 - 243
Amid the remarkable evolution of deep neural networks (DNNs), development of a highly optimized DNN accelerator for edge computing with both less hardware resource... 
accelerator | Quantization (signal) | Neural networks | area optimization | Random access memory | logarithmic quantization | Hardware | Arrays | machine learning | Adders | deep neural network
Conference Proceeding
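The entry above mentions logarithmic quantization for an area-optimized DNN accelerator. As a hedged illustration (the exponent range below is an invented example, not the paper's design), rounding each weight to a signed power of two lets fixed-point hardware replace every multiplier with a bit shift:

```python
import math

# Illustrative sketch of generic power-of-two (logarithmic) weight
# quantization; the exponent range [-4, 0] is a made-up example.
def log_quantize(w, min_exp=-4, max_exp=0):
    """Round |w| to the nearest power of two, clamp the exponent,
    and keep the sign. Zero stays zero."""
    if w == 0:
        return 0.0
    exp = round(math.log2(abs(w)))
    exp = max(min_exp, min(max_exp, exp))
    return math.copysign(2.0 ** exp, w)
```

On fixed-point hardware, multiplying an activation by 2**exp is a shift by |exp| bits, so the multiply-accumulate array shrinks to shifters and adders.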