
Computers and Electrical Engineering 60 (2017) 45–47

Contents lists available at ScienceDirect

Computers and Electrical Engineering journal homepage: www.elsevier.com/locate/compeleceng

Editorial

Introduction to the Special Section on Heterogeneous Computing Era

Background

Heterogeneous computing (HC) is the well-orchestrated, coordinated, and effective use of a suite of diverse high-performance machines (including parallel machines) to provide super-speed processing for computationally demanding tasks with diverse computing needs. On the one hand, an HC system includes heterogeneous machines, high-speed networks, interfaces, operating systems, communication protocols, and programming environments, all combining to produce a positive impact on ease of use and performance. On the other hand, HC should be distinguished from network computing or high-performance distributed computing, which have generally come to mean either clusters of workstations or ad hoc connectivity among computers using little more than opportunistic load balancing. HC is a plausible, novel approach to solving computationally intensive problems that exhibit several types of embedded parallelism. HC also helps to reduce design risk by incorporating proven technology and existing designs instead of developing them from scratch.

This special section aims to bring academics and industrial practitioners a set of articles discussing recent advances on core topics of the heterogeneous computing era, including heterogeneous computing design, applications of heterogeneous computing, and so on. There were 46 submissions for this special section, of which 12 high-quality papers were accepted; 7 of these were selected for inclusion in the special section. The remaining 5 papers, whose topics do not exactly fit the special section’s focus, are published in the journal’s regular issues.

Papers in the special section

The first article, “Dynamic replication to reduce access latency based on fuzzy logic system”, authored by Tao Wang, proposes a theoretical model of access-latency optimization with replication that fills a gap in existing work, and then presents a carefully designed dynamic replication strategy composed of three algorithms.
The motivation is straightforward, and the experimental results show that the proposed method outperforms other algorithms in terms of mean job execution time, computing-resource usage, amount of data scheduled between clusters, and number of replicas.

The second article, “An improved back propagation neural network prediction model for subsurface drip irrigation system”, proposes a crop yield–irrigation amount model based on an improved GA-BP neural network prediction algorithm. The author establishes a yield–irrigation water model, built on a back-propagation (BP) neural network improved by a genetic algorithm (GA), for predicting maize yield under different irrigation regimes in subsurface drip irrigation. Experimental results show that the GA-BP model achieves better performance: it not only speeds up the convergence of the network and improves forecast accuracy, but also describes the relationship between yield and irrigation under subsurface drip irrigation more accurately.

The next article, “Cooperative Ant Colony-Genetic Algorithm Based on Spark”, designs algorithms for solving the traveling salesman problem based on the ant colony algorithm on MapReduce and Spark. The author combines a nearest-neighbor selection strategy with a genetic algorithm so that the ant colony algorithm and the genetic algorithm update each other’s best individual at the end of each iteration. Experimental results show that the proposed method improves the performance of parallel computing.

http://dx.doi.org/10.1016/j.compeleceng.2017.05.026
0045-7906/© 2017 Published by Elsevier Ltd.
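The cooperative exchange of best individuals between the two metaheuristics can be sketched in plain Python. This is a toy single-machine illustration, not the paper’s Spark/MapReduce implementation: the city coordinates, the swap-mutation operator, and the nearest-neighbour construction standing in for the ant colony are all hypothetical stand-ins.

```python
import random

# Toy TSP instance: 8 hypothetical city coordinates.
random.seed(1)
CITIES = [(random.random(), random.random()) for _ in range(8)]

def tour_len(tour):
    # Total length of the closed tour.
    return sum(((CITIES[a][0] - CITIES[b][0]) ** 2 +
                (CITIES[a][1] - CITIES[b][1]) ** 2) ** 0.5
               for a, b in zip(tour, tour[1:] + tour[:1]))

def random_tour():
    t = list(range(len(CITIES)))
    random.shuffle(t)
    return t

def mutate(tour):
    # GA operator: swap two cities (simple 2-exchange mutation).
    a, b = random.sample(range(len(tour)), 2)
    t = tour[:]
    t[a], t[b] = t[b], t[a]
    return t

def nearest_neighbor_tour(start):
    # Stand-in for ant construction: greedy nearest-neighbour from a random start.
    unvisited = set(range(len(CITIES))) - {start}
    tour = [start]
    while unvisited:
        cur = tour[-1]
        nxt = min(unvisited, key=lambda c: (CITIES[cur][0] - CITIES[c][0]) ** 2 +
                                           (CITIES[cur][1] - CITIES[c][1]) ** 2)
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

aco_best = nearest_neighbor_tour(0)
ga_best = random_tour()
for it in range(200):
    # Each side improves its own best individual...
    cand = mutate(ga_best)
    if tour_len(cand) < tour_len(ga_best):
        ga_best = cand
    cand = nearest_neighbor_tour(random.randrange(len(CITIES)))
    if tour_len(cand) < tour_len(aco_best):
        aco_best = cand
    # ...then the two algorithms exchange their best individual,
    # so neither can fall behind the other's progress.
    if tour_len(aco_best) < tour_len(ga_best):
        ga_best = aco_best[:]
    else:
        aco_best = ga_best[:]
```

The exchange step is the cooperative element: after every iteration both optimizers hold the better of the two incumbents, which is what makes the combination more than two independent searches.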


To solve the embedding problem of virtual data centers based on software defined networking (SDN), the paper “Resource management framework for virtual data center embedding based on software defined networking” presents a novel resource management framework for the problem. At its core is a heuristic embedding algorithm based on topological potential and modularity, used to improve the acceptance ratio and the infrastructure providers’ revenue. The paper also proposes a dynamic monitoring strategy that selects virtual data centers with a high revenue-to-cost ratio, so as to maximize the further profit of infrastructure providers. Extensive simulation experiments show that the proposed algorithm accepts more requests at minimum cost and improves the revenues of infrastructure providers.

Traditional indicated-torque estimation requires high-cost, low-durability sensors to measure cylinder pressure. Alternatively, estimating the indicated torque from the instantaneous crankshaft speed offers promising practical potential. The paper “On-line indicated torque estimation for internal combustion engines using discrete observer” proposes a discrete sliding mode observer (SMO) to estimate the indicated torque of an internal combustion engine (ICE) online from its crankshaft speed fluctuation. First, a crankshaft dynamic model of a six-cylinder ICE is established to describe the interaction between the engine torque and the instantaneous speed. Then, the discrete SMO is designed to estimate the indicated torque from the crankshaft model. An experimental validation was conducted using a 6135 G diesel engine. The analysis results demonstrate that the presented discrete SMO can effectively estimate the ICE indicated torque and hence offers great potential for online monitoring and control of ICEs in practical applications.
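The observer idea can be illustrated with a minimal discrete sliding-mode sketch. This is not the paper’s six-cylinder model: the single-inertia crankshaft dynamics, the sinusoidal torque profile, and all numerical values (J, dt, rho, eps) are assumptions chosen only to show how a boundary-layer sliding injection reconstructs an unknown torque from the measured speed.

```python
import math

# Hypothetical single-inertia crankshaft model: J * d(omega)/dt = T_ind - T_load.
J, dt, T_load = 0.5, 1e-3, 2.0
rho, eps = 50.0, 0.5   # switching gain and boundary-layer width (assumed values)

def sat(x):
    # Saturation replaces sign() inside the boundary layer to suppress chattering.
    return max(-1.0, min(1.0, x))

def true_torque(t):
    # Stand-in for the unknown indicated torque the observer must reconstruct.
    return 10.0 + 4.0 * math.sin(20.0 * t)

omega, omega_hat = 100.0, 0.0   # true and estimated crankshaft speed (rad/s)
for k in range(20000):
    t = k * dt
    e = omega - omega_hat
    # Sliding-mode injection: once e is inside the boundary layer, the
    # injection (the "equivalent control") tracks the unknown torque.
    u = rho * sat(e / eps)
    omega_hat += dt / J * (u - T_load)          # observer update
    omega += dt / J * (true_torque(t) - T_load)  # plant update

T_hat = u   # equivalent control = online indicated-torque estimate
```

The design choice here is the boundary layer: with a pure `sign(e)` injection the estimate would need low-pass filtering to remove chattering, whereas the saturated injection yields a smooth torque estimate directly, at the cost of a small steady-state speed error of order `eps * T / rho`.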
Recently, deep learning methods have achieved great success in many fields, and one of the most popular deep architectures is the convolutional neural network (CNN), which effectively exploits local features of the input images. In “A convolutional neural network based method for event classification in event-driven multi-sensor network”, the author proposes a CNN-based method to improve event classification accuracy for homogeneous multi-sensor networks. The results indicate that this CNN-based classifier outperforms k-Nearest Neighbor (kNN) and Support Vector Machine (SVM) methods on several datasets, achieving higher accuracy.

The bandwidth extension (BWE) algorithm in China’s mobile audio codec standard was proposed to improve audio quality in mobile communication, but its computational complexity is too high to implement on mobile devices. By analyzing the BWE algorithm, the paper “A Low Computational Complexity Bandwidth Extension Method for Mobile Audio Coding” finds that the main source of the high computational complexity is the frequent use of time–frequency transformations. The author then proposes a low-complexity scheme that includes both algorithm optimization and code optimization. The experimental results show that the computation-time consumption ratios of the BWE module in the encoder and decoder decrease by 4.5 and 14.3 percentage points, respectively, without reducing the overall subjective quality of the audio codec, which is conducive to implementing the algorithm in the mobile audio field.
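The local-feature extraction that makes a CNN effective can be sketched with a minimal one-dimensional convolution, ReLU, and max-pooling pipeline. The sensor reading and the edge-detecting kernel below are hypothetical; in a real classifier such as the one in the paper, the kernels are learned and a fully connected layer produces the class scores.

```python
def conv1d(signal, kernel):
    # Valid-mode 1-D convolution: slide the kernel over the signal.
    n = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(n))
            for i in range(len(signal) - n + 1)]

def relu(xs):
    # Non-linearity: keep positive responses only.
    return [max(0.0, x) for x in xs]

def max_pool(xs, width):
    # Downsample by keeping the strongest response in each window.
    return [max(xs[i:i + width]) for i in range(0, len(xs) - width + 1, width)]

# A hypothetical "event" reading from one sensor: a sharp step at index 4.
reading = [0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0]
edge_kernel = [-1.0, 1.0]   # responds only to upward steps

features = max_pool(relu(conv1d(reading, edge_kernel)), 2)
```

The pooled feature vector localizes the step event in the middle of the signal while staying invariant to small shifts, which is exactly the property that lets a CNN classify events more robustly than distance-based methods such as kNN.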

Acknowledgement

The guest editors thank our reviewers for their effort in reviewing the manuscripts. We also thank the Editor-in-Chief for his supportive guidance during the entire process. The special issue is supported by the National Natural Science Foundation of China under Grants No. 61502254, No. 71471119, and No. 71601125, and by the Development Program for Distinguished Young Teachers in Higher Education of Guangdong Province (Yq2013147).

Zhigao Zheng
Central China Normal University, Wuhan, China

Jinming Wen
Centre national de la recherche scientifique, France

Shuai Liu
Inner Mongolia University, China

E-mail addresses: [email protected] (Z. Zheng), [email protected] (J. Wen), [email protected] (S. Liu)


Zhigao Zheng was an Associate Researcher with the National Engineering Research Centre for E-learning and the Collaborative & Innovative Centre for Educational Technology at Central China Normal University. He is a guest editor of ACM/Springer Mobile Networks and Applications, Multimedia Tools and Applications, Journal of Intelligent and Fuzzy Systems, Computers and Electrical Engineering, and International Journal of Networking and Virtual Organisations. He is also a reviewer for many journals, such as IEEE Transactions on Big Data, IEEE Transactions on Industrial Informatics, Journal of Network and Computer Applications, The Journal of Supercomputing, and Multimedia Tools and Applications, and for conferences such as SC’16, CCGrid’16, NPC’15, and NPC’16. His research interests include distributed data stream analysis, cloud computing, and graph computing. He became a Member of CCF in 2012, a Member of ACM in 2012, and a Member of IEEE in 2016.

Jinming Wen received his Bachelor’s degree in Information and Computing Science from Jilin Institute of Chemical Technology, Jilin, China, in 2008, his M.Sc. degree in Pure Mathematics from the Mathematics Institute of Jilin University, Jilin, China, in 2010, and his Ph.D. degree in Applied Mathematics from McGill University, Montreal, Canada, in 2015. He was a postdoctoral research fellow at Laboratoire LIP, ENS de Lyon, from March 2015 to August 2016. He is currently a postdoctoral research fellow at the Department of Electrical and Computer Engineering, University of Alberta. His research interests are in the areas of lattice reduction with applications in communications, signal processing, and cryptography, and sparse recovery. He was a Guest Editor for 4 special issues, including one in ACM/Springer Mobile Networks and Applications.

Shuai Liu is currently an associate professor at Inner Mongolia University, China. He serves or has served as an editor or guest editor for many technical journals. He has published more than 20 papers in Elsevier and Springer journals. His research interests include fractal applications, image processing, and computer vision.