Energy-Aware Sporadic Tasks Scheduling with Shared Resources in Hard Real-Time Systems

Yi-wen Zhang, Cheng Wang, Chang-long Lin
College of Computer Science and Technology, Huaqiao University, Xiamen 361021, China

Highlights
1. We consider sporadic tasks with shared resources.
2. The problem of energy-aware sporadic task scheduling is considered.
3. Dynamic priority algorithms are presented to solve this problem.
4. The DVS technique and the DPM technique are used to reduce energy consumption.

Abstract: We address the problem of minimizing the overall energy consumption of sporadic tasks with shared resources in a hard real-time system. Previous algorithms use static speeds to deal with this problem. In contrast, we propose a novel scheduling algorithm, called DSSTS, for sporadic tasks with shared resources. The DSSTS algorithm schedules each task at a dynamic low speed and switches to a high speed when the task is blocked by a lower priority task; it assumes that each task executes for its worst case execution time. For further energy efficiency, we propose a dynamic reclaiming dynamic speed sporadic task scheduling algorithm, called DRDSSTS. The DRDSSTS algorithm is an extension of the DSSTS algorithm. It reclaims the dynamic slack time generated by early-completing tasks to adjust the processor speed, and it uses the DPM technique to put the processor into a dormant mode when the processor is idle. The experimental results show that the DRDSSTS algorithm reduces the energy consumption by 2.64%~37.16% over the DSSTS algorithm and consumes 35.16%~71.15% less energy than the DS algorithm.

Keywords: sporadic task, energy management, real-time system, shared resource, real-time scheduling

1 Introduction

With the development of information technology, more and more personal computing and communication devices are becoming mobile and portable, and most of them are powered by batteries. Energy is therefore an important optimization issue in the design and operation of embedded systems. There are two ways to reduce processor energy consumption: dynamic voltage scaling (DVS) [1] and dynamic power management (DPM) [2]. DVS adjusts the frequency and voltage of the processor to reduce energy consumption. DPM shuts down an idle device or puts it into a dormant mode to reduce energy consumption. Most of the earlier work focuses on independent task sets.
Aydin et al. [1] studied independent periodic task sets and proposed a dynamic reclaiming algorithm which reclaims the slack time of completed higher-priority tasks to reduce energy consumption. However, this work only considers the dynamic power of the processor and ignores its static power. For energy efficiency, Jejurikar et al. [3] proposed a critical speed to minimize the system-level energy consumption; in other words, they consider a general power model. The above work uses a dynamic priority policy to schedule the task set. Niu and Li [4] use a fixed priority policy and propose a novel algorithm that combines the critical speed strategy and a shut-down strategy to reduce energy consumption. In fact, in many real-time applications, tasks are dependent due to shared resources. Low power scheduling in the presence of non-preemptive sections has been considered in [5]. Zhang and Chanson addressed energy efficient scheduling with non-preemptible sections and proposed the dual speed (DS) algorithm based on the EDF policy [5]. There are two speeds in the DS algorithm: a low speed S_L calculated from an analysis of the independent task set without considering the effect of blocking, and a high speed S_H that takes the maximum blocking time due to non-preemption into account. Lee et al. [6] extended this work and proposed a multi-speed algorithm that exploits various speed levels, depending on the specific blocking situation, to minimize the energy consumption. In addition, Jejurikar [7] proposed a stack-based slowdown algorithm that builds upon the optimal feasibility test for non-preemptive systems; it minimizes the transitions to a higher speed by computing different slowdown factors based on the blocking task. Moreover, the above algorithms overestimate the preemptions caused by non-preemptive tasks, which results in task speeds that are higher than necessary. Li et al. [8] proposed the individual speed algorithm (ISA), a novel scheme that computes one speed for each individual task in a non-preemptive task set without jeopardizing any task deadline. Another topic related to shared resources is task synchronization. Jejurikar and Gupta [9] addressed the problem of computing static and dynamic slowdown factors in the presence of task synchronization and proposed the dual mode (DM) algorithm based on the RM scheduling policy. The same problem under the EDF scheduling policy is studied in [10]. Moreover, Wu [11] proposed the blocking-aware two-speed (BATS) algorithm based on the stack resource policy to synchronize tasks with shared resources; the BATS algorithm fits a non-ideal processor and achieves more energy savings. Note that the above studies focus on dependent periodic tasks with shared resources. Few studies focus on dependent sporadic tasks with shared resources. Horng et al. [12] proposed a novel algorithm, called DVSSR, to solve the problem of scheduling dependent sporadic tasks with shared resources, but it ignores the static power of the processor and assumes that each task executes for its worst case execution time. Zhang and Guo [13] extended this work and proposed an energy efficient algorithm, called DSTSASR. The DSTSASR algorithm reclaims the slack time of early-completing tasks to reduce energy consumption and takes the general power model into account, but it assumes that each task requires access to at most one resource at a time.
Recently, some work has focused on energy management for multi-core processor systems [24-26]. Han et al. [24] studied both static and dynamic synchronization-aware energy management schemes for a set of periodic real-time tasks that access shared resources. A suspension mechanism based on the enhanced MSRP resource access protocol and a synchronization-aware task mapping heuristic are proposed to solve the energy consumption problem for multi-core processor systems. A multi-objective evolutionary algorithm based on a task scheduling approach has been proposed to optimize the performance, energy, and temperature of multi-core processor systems [25]. In addition, a triple speed algorithm based on the enhanced MSRP resource access protocol has been proposed to solve the problem of energy-aware real-time task synchronization for multi-core systems [26].

To the best of our knowledge, this is the first work that minimizes the overall energy consumption of dependent sporadic tasks with shared resources while considering a generalized power model and allowing a task to access more than one resource at a time. We first present a novel scheduling algorithm, called DSSTS, which assumes that each task executes for its worst case execution time. The DSSTS algorithm schedules each task at a dynamic low speed and switches to a high speed when the task is blocked by a lower priority task. It is based on the SRP resource protocol and the EDF scheduling policy, and a task can access more than one resource at a time. In addition, we present a dynamic reclaiming dynamic speed sporadic task scheduling algorithm, called DRDSSTS. DRDSSTS is an extension of the DSSTS algorithm. It combines the DVS technique and the DPM technique: it reclaims the dynamic slack time generated by early-completing tasks to adjust the processor speed, and it uses the DPM technique to put the processor into a dormant mode to save energy when the processor is idle.

The rest of the paper is organized as follows. Section 2 introduces the preliminaries. Section 3 describes the dynamic speed sporadic tasks scheduling (DSSTS) algorithm. Section 4 presents the dynamic reclaiming dynamic speed sporadic tasks scheduling (DRDSSTS) algorithm. The experimental results are presented in Section 5, and Section 6 concludes the paper.

2 Preliminaries

2.1 Processor and Power Model

A wide range of processors, such as the Intel XScale PXA270 and the AMD A8-6410, support the DVS technique. They provide variable speed levels from a minimum to a maximum available speed. Let S_max be the maximum available speed. We normalize the speed with respect to S_max, so the processor speed can be set in the range [S_min, 1], where S_min is the minimum normalized speed. We ignore the overhead of changing speed; this overhead can be incorporated into the worst case execution time of a task. The processor power model is studied in [14-16]. The processor power P can be expressed as follows:

P = P_s + h (P_ind + C_ef S^m)    (1)
where P_s is the static power, P_ind is the frequency-independent active power, C_ef is the effective switching capacitance, S is the running speed of the processor, and m (in general, 2 \le m \le 3) is the dynamic power exponent. When the system is actively executing a task, the coefficient h is equal to 1; otherwise, h = 0. For energy efficiency, the critical speed S_crit, which is the minimum energy-efficient speed, was proposed in [3]. When the speed of the processor deviates from the critical speed, the energy consumption increases. To guarantee the performance of the system and reduce the energy consumption, no task should be executed at a speed lower than the critical speed. The energy consumption E of the processor in the time interval [t_1, t_2] can be expressed as

E = \int_{t_1}^{t_2} P \, dt.

The processor has three modes: active mode, idle mode, and dormant mode. The dormant mode consumes the least energy; therefore, we can switch the processor to the dormant mode by DPM to save energy. However, this transition incurs energy and time overheads [17]. Let E_o be the energy overhead of switching from the dormant mode back to the active mode and P_idle be the power consumption of the processor in the idle mode. The break-even time t_o can be expressed as

t_o = E_o / P_idle.

When the length of an idle interval is larger than the break-even time, the processor can be put into the dormant mode by DPM to reduce the energy consumption.
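As an illustration of these power-model quantities, the C sketch below evaluates Equation (1), the critical speed, and the break-even rule. The constants are only placeholders matching the example model P = 0.08 + 1.52 S^3 used in Section 2.3, and the function names are ours, not taken from any cited work.

```c
#include <math.h>
#include <stdbool.h>

/* Example constants; the numeric values are illustrative placeholders only. */
static const double P_S    = 0.08;   /* static power                          */
static const double P_IND  = 0.0;    /* frequency-independent active power    */
static const double C_EF   = 1.52;   /* effective switching capacitance       */
static const double M_EXP  = 3.0;    /* dynamic power exponent m              */
static const double E_O    = 0.2;    /* wake-up energy overhead               */
static const double P_IDLE = 0.085;  /* idle-mode power                       */

/* Equation (1): P = P_s + h*(P_ind + C_ef * S^m); h = 1 while a task runs. */
double power(double s, int h)
{
    return P_S + (h ? (P_IND + C_EF * pow(s, M_EXP)) : 0.0);
}

/* Critical speed: the minimizer of P(s)/s, i.e. the energy per unit of work.
   Setting d/ds [P(s)/s] = 0 gives s_crit = ((P_s+P_ind)/((m-1)*C_ef))^(1/m). */
double critical_speed(void)
{
    return pow((P_S + P_IND) / ((M_EXP - 1.0) * C_EF), 1.0 / M_EXP);
}

/* DPM rule: sleeping pays off only when the idle interval exceeds
   the break-even time t_o = E_o / P_idle.                          */
bool worth_sleeping(double idle_interval)
{
    return idle_interval > E_O / P_IDLE;
}
```

For the example constants, critical_speed() returns approximately 0.297, which matches the value S_crit = 0.3 quoted in Section 2.3.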
2.2 System Model

We consider a set of n sporadic real-time tasks T = {T_1, T_2, ..., T_n}. The task set T is scheduled by the earliest deadline first (EDF) algorithm on a uniprocessor system. Each task T_i is assumed to be preemptible and dependent, because the tasks access shared resources. A task T_i is described by (P_i, C_i, Z_i), where P_i is the minimum separation period between the releases of two consecutive instances of T_i, C_i is the worst case execution time of T_i at the maximum speed, and Z_i is the list of its critical sections. The time unit is the millisecond (ms). Let P(T_i) be the priority of task T_i. In this paper, we assume that the relative deadline of a sporadic task is equal to its minimum separation period. We sort the tasks in ascending order of their minimum separation periods, i.e., P_1 <= P_2 <= ... <= P_n. We also assume that the execution time of a task scales linearly with the processor speed, i.e., the execution time of task T_i at speed S is C_i / S.
We assume that the sporadic tasks can access a set of m shared resources R = {R_1, R_2, ..., R_m}. The shared resources must be accessed in a mutually exclusive manner. There are many mechanisms, such as semaphores, locks, and monitors, to enforce exclusive access [18]; we assume that semaphores are used to ensure mutually exclusive access to the shared resources. A critical section is a time interval during which a task has been granted access to a shared resource, and other tasks that want to access the same shared resource are blocked. The critical sections [11] of a task T_i are given by Z_i = {Z_{i,1}, Z_{i,2}, ..., Z_{i,n_i}}, where Z_{i,j} is the j-th critical section of task T_i. We assume that the critical sections of a task are properly nested. When a task wants to enter a critical section, it must lock the corresponding semaphore; when it leaves the critical section, it must unlock the semaphore. Due to the access to shared resources, a task may be blocked by a lower priority task. The worst case blocking time of task T_i is denoted by B_i; it is the maximum blocking time over all instances of task T_i.

There are many protocols, such as SRP [19] and DPCP [20], to handle shared resources. We use SRP to synchronize the tasks' access to shared resources. Under SRP, a preemption level is assigned to each task. Let \pi_i and \pi_j be the preemption levels of tasks T_i and T_j, respectively. Task T_i can preempt task T_j only if \pi_i > \pi_j. The preemption levels of tasks are assigned inversely proportional to their relative deadlines (i.e., \pi_i > \pi_j if and only if P_i < P_j). The preemption level of each shared resource R_i is the highest preemption level of all tasks that may access R_i. The system ceiling is the highest preemption level of all shared resources that are currently in use; it is updated each time a resource is allocated to a task or freed, and it is equal to 0 when all shared resources are free. According to [19], the feasibility condition of SRP is given as follows.

Theorem 1 [19]. A set of n (periodic and aperiodic) tasks with relative deadlines equal to their respective periods is schedulable by EDF scheduling if

\forall k, 1 \le k \le n:  \sum_{i=1}^{k} C_i / P_i + B_k / P_k \le 1,

where B_k is the maximum blocking time over all instances of task T_k.
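A minimal C sketch of the Theorem 1 test is given below. The task_t record and its field names are assumptions made for illustration; tasks are assumed to be stored in non-decreasing order of their periods.

```c
#include <stdbool.h>

/* Minimal task record; fields follow the model of Section 2.2. */
typedef struct {
    double p;   /* minimum separation period P_i (= relative deadline) */
    double c;   /* worst-case execution time C_i at maximum speed      */
    double b;   /* worst-case blocking time B_i                        */
} task_t;

/* Theorem 1 (SRP + EDF): for every k, 1 <= k <= n,
 *   sum_{i=1..k} C_i/P_i + B_k/P_k <= 1.
 * Tasks are assumed to be sorted in non-decreasing order of P_i. */
bool srp_edf_feasible(const task_t *t, int n)
{
    double util = 0.0;                 /* running sum of C_i/P_i        */
    for (int k = 0; k < n; k++) {
        util += t[k].c / t[k].p;
        if (util + t[k].b / t[k].p > 1.0)
            return false;              /* condition violated for this k */
    }
    return true;
}
```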
2.3 Motivation

Zhang and Chanson [5] studied low energy scheduling of real-time periodic tasks with non-preemptible critical sections and proposed the dual-speed (DS) algorithm. Note that critical sections cannot be preempted in the DS algorithm. Two speeds are calculated in the DS algorithm. The low speed S_L [5] is a static minimum speed that does not consider blocking time; it must ensure that the set of periodic tasks is feasibly scheduled by EDF. It can be expressed as follows:

S_L = \sum_{i=1}^{n} C_i / P_i.    (2)
The high speed S_H [5] considers the worst case blocking time of each task and can be expressed as follows:

\forall k, 1 \le k \le n:  \sum_{i=1}^{k} C_i / P_i + \max\{B_j | P_k < P_j\} / P_k \le S_H.    (3)
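The two static speeds of the DS algorithm can be derived directly from Equations (2) and (3). The C sketch below is one possible implementation; it reuses the hypothetical task_t record introduced earlier and assumes the tasks are sorted by non-decreasing period.

```c
/* Equation (2): the static low speed is the total utilization. */
double ds_low_speed(const task_t *t, int n)
{
    double s_l = 0.0;
    for (int i = 0; i < n; i++)
        s_l += t[i].c / t[i].p;
    return s_l;
}

/* Equation (3): the static high speed must cover, for every k, the
 * utilization of tasks T_1..T_k plus the largest blocking term of a
 * task with a period larger than P_k, normalized by P_k.           */
double ds_high_speed(const task_t *t, int n)
{
    double s_h = 0.0, util = 0.0;
    for (int k = 0; k < n; k++) {
        util += t[k].c / t[k].p;
        double bmax = 0.0;             /* max{ B_j | P_k < P_j } */
        for (int j = k + 1; j < n; j++)
            if (t[j].b > bmax)
                bmax = t[j].b;
        double need = util + bmax / t[k].p;
        if (need > s_h)
            s_h = need;
    }
    return s_h;
}
```

With the periods and execution times of Figure 1(a), ds_low_speed() returns S_L = 1/4 + 1/5 + 2/10 = 0.65, matching the value used below.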
To reduce energy consumption, the DS algorithm begins execution at S_L and switches to S_H when a task is blocked by a lower priority task, until the blocked task completes its execution. Consider a simple real-time system with three sporadic tasks T_1, T_2, T_3, as shown in Figure 1(a). The minimum separation periods of T_1, T_2 and T_3 are 4, 5 and 10, respectively, and their worst case execution times are 1, 1 and 2, respectively. The yellow boxes and white boxes represent the critical sections and the non-critical sections of the tasks, respectively; the upward and downward arrows mark the arrival times and the deadlines of the tasks. We assume that the processor provides continuous speeds from 0.15 to 1.0 and that its power model is P = 0.08 + 1.52 S^3; its idle power is 0.085 and the critical speed is S_crit = 0.3 [21].

The sporadic tasks are scheduled by the DS algorithm [5] in the time interval [0, 20] as shown in Figure 1(b). According to [5], the low speed and the high speed are S_L = 0.65 and S_H = 0.85. At time 0, tasks T_2 and T_3 arrive. Because the priority of T_2 is higher than that of T_3, T_2 begins to execute at the low speed S_L. At time 0.5, T_1 arrives and is blocked by T_2; T_2 switches to the high speed S_H and completes its execution at time 1.3. At time 1.3, T_1 begins to execute at the high speed S_H and completes at time 2.48. At time 2.48, T_3 begins to execute at the low speed S_L and completes at time 5.56. At time 6, T_1 arrives and begins to execute at the low speed S_L; it completes at time 7.54. At time 7, T_2 arrives, but its priority is lower than that of T_1.
At time 7.54, T_2 begins to execute at the low speed S_L and completes at time 9.08. At time 12, T_3 arrives and begins to execute at the low speed S_L. At time 13, T_1 arrives and is blocked by T_3; T_3 switches to the high speed S_H and completes at time 14.59. At time 14.59, T_1 begins to execute at the high speed S_H and completes at time 15.77. Finally, T_2, which arrives at time 15, begins to execute at the low speed S_L and completes at time 17.33.

Note that the low speed and the high speed of the DS algorithm are static. This can result in speeds that are higher than necessary, and hence in more energy consumption. Jejurikar [7] showed that the high speed S_H of the DS algorithm, which is based on a sufficient feasibility test, is not optimal, and proposed a high speed for each task. In addition, Zhang and Guo [13] studied the low power scheduling problem of independent sporadic tasks and proposed a dynamic speed for energy efficiency. In this example, the high speed of task T_1 is 0.75 [7]. The improved, energy-efficient schedule with dynamic speeds is shown in Figure 1(c). At time 0, tasks T_2 and T_3 arrive. The dynamic low speed is S_L = 0.4, so T_2 begins to execute at S_L = 0.4. At time 0.5, T_1 arrives and is blocked by T_2; the dynamic low speed becomes S_L = 0.65, T_2 switches to the high speed S_H = 0.75, and it completes at time 1.57. At time 1.57, T_1 begins to execute at the high speed S_H = 0.75 and completes at time 2.90. At time 2.90, T_3 begins to execute at the dynamic low speed S_L = 0.65. At time 4.5, T_1 has not released a new instance, so the dynamic low speed drops to S_L = 0.4 and T_3 switches to S_L = 0.4. At time 5, T_2 has not released a new instance, so the dynamic low speed becomes S_L = 0.2; because this is smaller than the critical speed (S_crit = 0.3), the dynamic low speed is set to S_L = 0.3. At time 6, T_1 arrives and is blocked by T_3, and the dynamic low speed becomes S_L = 0.45; T_3 switches to the high speed S_H = 0.75 and completes at time 6.61. At time 6.61, T_1 begins to execute at the high speed S_H = 0.75 and completes at time 7.94. Note that T_2 arrives at time 7 and the dynamic low speed becomes S_L = 0.65. At time 7.94, T_2 begins to execute at the low speed S_L = 0.65 and completes at time 9.48. T_3 arrives at time 12 and the dynamic low speed is 0.2; because this is smaller than the critical speed (S_crit = 0.3), the dynamic low speed is set to S_L = 0.3, and T_3 begins to execute at S_L = 0.3 at time 12. At time 13, T_1 arrives and is blocked by T_3, and the dynamic low speed becomes S_L = 0.45; T_3 switches to the high speed S_H = 0.75 and completes at time 15.27. Note that T_2 arrives at time 15 and the dynamic low speed becomes S_L = 0.65. At time 15.27, T_1 begins to execute at the high speed S_H = 0.75 and completes at time 16.60. At time 16.60, T_2 begins to execute at the dynamic low speed S_L = 0.65. T_1 does not release a new instance at time 17, so the dynamic low speed drops to S_L = 0.4; at time 17, T_2 switches to S_L = 0.4 and completes at time 18.92.
We compute the energy consumption of the DS schedule in Figure 1(b) and of the improved schedule in Figure 1(c). The energy consumption of the DS algorithm and of the improved schedule is 9.90 mJ and 8.55 mJ, respectively. Thus, the improved schedule consumes about (9.90 - 8.55) / 9.90 * 100% = 13.64% less energy than the DS algorithm.
3. Dynamic speed sporadic tasks scheduling algorithm

In this section, we present the dynamic speed sporadic tasks scheduling (DSSTS) algorithm. It is based on the SRP resource protocol and the EDF scheduling policy. Similar to the DS algorithm, a task begins execution at the dynamic low speed S_L, i.e., without considering the blocking time caused by lower priority tasks. When a task is blocked by a lower priority task, the task switches to the high speed, i.e., the blocking time is taken into account. Before presenting the DSSTS algorithm, we introduce some definitions.

Definition 1. DTS is the delayed task set. A task T_i in DTS belongs to the sporadic task set T = {T_1, T_2, ..., T_n}, and the interval between its instance releases is larger than its minimum separation period P_i.

Definition 2. S_{H_i} is the high speed of task T_i, i.e., it takes the blocking time of task T_i into account.

Independent sporadic task scheduling has been studied in [15, 22]. The low speed S_L of the DSSTS algorithm does not take the blocking time into account, so it can be treated as in independent sporadic task scheduling. The low speed S_L is computed by Algorithm 1.

Algorithm 1 Compute the low speed S_L
1. DTS = T, S_L = 0;  // initial condition
2. If task T_i releases an instance and T_i \in DTS
3.   S_L = S_L + C_i / P_i, DTS = DTS \ {T_i};
4. Else if task T_i does not release an instance within P_i time units and T_i \notin DTS
     S_L = S_L - C_i / P_i, DTS = DTS \cup {T_i};
5. If there is no task to schedule
6.   DTS = T, S_L = 0;
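A C sketch of the Algorithm 1 bookkeeping is shown below; it reuses the hypothetical task_t record from the earlier sketches, and the state structure, the event hooks and their names are illustrative assumptions. Lines 5-6 of Algorithm 1 correspond to simply calling lowspeed_init() again.

```c
#include <stdbool.h>

/* Dynamic low-speed bookkeeping of Algorithm 1.  A task contributes
 * C_i/P_i to S_L only while it keeps releasing instances at least once
 * per minimum separation period; otherwise it sits in the delayed task
 * set DTS and is excluded from S_L.                                   */
typedef struct {
    const task_t *tasks;   /* task set                                 */
    bool *in_dts;          /* in_dts[i] == true  <=>  T_i is in DTS    */
    int n;
    double s_l;            /* current dynamic low speed                */
} lowspeed_state_t;

void lowspeed_init(lowspeed_state_t *st)             /* line 1: DTS = T, S_L = 0 */
{
    st->s_l = 0.0;
    for (int i = 0; i < st->n; i++)
        st->in_dts[i] = true;
}

void on_release(lowspeed_state_t *st, int i)         /* lines 2-3 */
{
    if (st->in_dts[i]) {
        st->s_l += st->tasks[i].c / st->tasks[i].p;
        st->in_dts[i] = false;                       /* remove T_i from DTS */
    }
}

void on_release_timeout(lowspeed_state_t *st, int i) /* line 4: no release
                                                        within P_i time units */
{
    if (!st->in_dts[i]) {
        st->s_l -= st->tasks[i].c / st->tasks[i].p;
        st->in_dts[i] = true;                        /* put T_i back into DTS */
    }
}
```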
Next, we compute a high speed that takes into account the blocking time caused by lower priority tasks. The optimal feasibility condition for shared resources under the EDF scheduling policy is stated below.

Theorem 2 [7]. A periodic task set, sorted in non-decreasing order of period, can be feasibly scheduled under the EDF scheduling policy at a constant slowdown of S_{H_i} if the following constraints are satisfied:

\forall t, P_1 \le t \le P_i:  \frac{1}{S_{H_i}} \left( \max\{B_j | P_i < P_j\} + \sum_{k=1}^{i} \lfloor t / P_k \rfloor C_k \right) \le t,  and  S_L \le S_{H_i}.    (4)

Here \max\{B_j | P_i < P_j\} is the maximum blocking time that task T_i may suffer while guaranteeing all higher priority task deadlines, and \sum_{k=1}^{i} \lfloor t / P_k \rfloor C_k is the processor demand of all higher priority tasks. Hence the left-hand side of (4) is the total processor demand in [0, t] under S_{H_i}, and it must be smaller than or equal to the available time in [0, t].

Corollary 1. A sporadic task set in which every task releases an instance within its minimum separation period, sorted in non-decreasing order of minimum separation period, can be feasibly scheduled under the EDF scheduling policy at a constant slowdown of S_{H_i} if the following constraints are satisfied:

\forall t, P_1 \le t \le P_i:  S_{H_i} \ge \frac{\max\{B_j | P_i < P_j\} + \sum_{k=1}^{i} \lfloor t / P_k \rfloor C_k}{t},  and  S_L \le S_{H_i}.    (5)
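Under the assumption that the left-hand side of Equation (5) only changes at multiples of the periods, the per-task high speed S_{H_i} can be evaluated by testing those points in [P_1, P_i]. The C sketch below is an illustrative implementation of Corollary 1 under that assumption, again reusing the hypothetical task_t record.

```c
#include <math.h>

/* Demand bound at time x for tasks T_1..T_i (i is the zero-based index
 * of task T_i), including the worst blocking max{ B_j | P_i < P_j }:
 *   dbf(x) = max{B_j | P_i < P_j} + sum_{k=1..i} floor(x/P_k) * C_k   */
static double demand_with_blocking(const task_t *t, int n, int i, double x)
{
    double bmax = 0.0;
    for (int j = i + 1; j < n; j++)
        if (t[j].b > bmax)
            bmax = t[j].b;
    double d = bmax;
    for (int k = 0; k <= i; k++)
        d += floor(x / t[k].p) * t[k].c;
    return d;
}

/* Equation (5): S_Hi is the maximum of demand_with_blocking(x)/x over
 * the test points x in [P_1, P_i], and never below the low speed S_L. */
double high_speed(const task_t *t, int n, int i, double s_l)
{
    double s_hi = s_l;
    for (int k = 0; k <= i; k++) {                    /* multiples of each P_k */
        for (double x = t[k].p; x <= t[i].p + 1e-9; x += t[k].p) {
            if (x < t[0].p) continue;                 /* only x >= P_1 */
            double need = demand_with_blocking(t, n, i, x) / x;
            if (need > s_hi)
                s_hi = need;
        }
    }
    return s_hi;
}
```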
Algorithm 2 The dynamic speed sporadic tasks scheduling (DSSTS) algorithm
1. Compute S_L and S_{H_i}, with S_L >= S_crit and S_{H_i} >= S_crit;
2. When the task T_i arrives:
3. If (the processor is idle before the task arrival)
4.   Task T_i is executed at the dynamic low speed S_L.
5. End if
6. If (P(T_i) > P(T_k))  // T_k is the currently executing task
7.   If (T_i successfully preempts T_k)
8.     Task T_i is executed at the dynamic low speed S_L.
9.   Else  /* T_i is blocked by T_k */
10.    Task T_k switches to the high speed S_{H_i}, and task T_i is then executed at the high speed S_{H_i} until its completion.
11.  End if
12. Else
13.  The currently executing task T_k continues to execute at its previous speed.

Algorithm 2 describes the proposed DSSTS algorithm. In DSSTS, a task is executed either at the dynamic low speed (when it is not blocked) or at the high speed (when blocking occurs). The high speeds can be computed before the tasks are scheduled, while the dynamic low speed is computed by Algorithm 1 (line 1). If task T_i arrives and the processor was idle before its arrival, T_i begins to execute at the dynamic low speed S_L (lines 2-5). If task T_i arrives and successfully preempts the currently executing task T_k, T_i is executed at the dynamic low speed S_L (lines 7-8). Finally, if task T_i arrives and is blocked by the current task T_k, then T_k switches to the high speed S_{H_i}; when T_k completes its execution, T_i begins to execute at the high speed S_{H_i} until its completion (lines 9-10).
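The run-time decision of Algorithm 2 amounts to the small helper sketched below; the event enumeration and function name are illustrative assumptions.

```c
typedef enum { ARRIVAL_ON_IDLE, PREEMPTS_CURRENT, BLOCKED_BY_LOWER } dssts_event_t;

/* Speed chosen by the DSSTS dispatcher for the task that (re)starts executing.
 * Lines 3-8 of Algorithm 2: without blocking, run at the dynamic low speed S_L.
 * Lines 9-10: once blocked, both the blocking task and the blocked task run at
 * the per-task high speed S_Hi until the blocked task completes.
 * Line 1: no task is ever run below the critical speed.                        */
double dssts_speed(dssts_event_t ev, double s_l, double s_hi, double s_crit)
{
    double s = (ev == BLOCKED_BY_LOWER) ? s_hi : s_l;
    return (s > s_crit) ? s : s_crit;
}
```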
Next, we prove that the DSSTS algorithm guarantees the feasibility of the system.

Theorem 3. A sporadic task set in which every task releases an instance within its minimum separation period, sorted in non-decreasing order of minimum separation period, can be feasibly scheduled by the DSSTS algorithm at the low speed S_L computed by Algorithm 1 and the high speeds S_{H_i} computed by Equation (5).
Proof. The proof is similar to those in [5, 7, 11]. Assume that the DSSTS algorithm is not schedulable and let t be the earliest time at which a task misses its deadline. Let t' (t' < t) be the latest time point such that no task that releases an instance before t' has a deadline at or before t; if no such point exists, let t' = 0. Then there is no idle time in [t', t], and two cases can occur. In the first case, only the set of tasks \Gamma that arrive no earlier than t' and have deadlines at or before t execute in [t', t]. In the second case, there exists a task T_k that arrives before t', has a deadline after t, and executes in [t', t].

Case 1: no blocking occurs during [t', t]. In this case, only tasks in \Gamma are executed during [t', t], and all tasks are executed at the dynamic low speed S_L. Let D_{t',t} be the total processor demand in [t', t]. Because a task misses its deadline at t, D_{t',t} must satisfy

D_{t',t} > t - t'.    (6)

We divide [t', t] into adjacent subintervals of lengths {\tilde{S}_1, \tilde{S}_2, ..., \tilde{S}_m}, where \tilde{S}_j is the length of the j-th subinterval (the first subinterval is [t', t' + \tilde{S}_1]) and \sum_{i=1}^{m} \tilde{S}_i = t - t'. In the corresponding subintervals the tasks are executed at the speeds {S_1, S_2, ..., S_m}. According to [23], for any subinterval \tilde{S}_j, the processor demand in \tilde{S}_j, denoted by D_{\tilde{S}_j}, is equal to

D_{\tilde{S}_j} = \frac{\tilde{S}_j}{S_j} \sum_{i=1, T_i \notin DTS}^{n} C_i / P_i,

because DTS contains all tasks that did not release an instance within their minimum separation period before the start of the subinterval \tilde{S}_j. By Algorithm 1,

S_j \ge \sum_{i=1, T_i \notin DTS}^{n} C_i / P_i,

so D_{\tilde{S}_j} \le \tilde{S}_j. The total processor demand over all subintervals is therefore

\sum_{j=1}^{m} D_{\tilde{S}_j} \le \sum_{j=1}^{m} \tilde{S}_j = t - t',

which contradicts Equation (6).
Case 2: blocking occurs during [t', t]. In this case, task T_k has begun its execution before t'. First, suppose that only one blocking, caused by T_k, occurs during [t', t]. Let T_i be the task that is blocked by T_k; at this moment, T_i is the highest priority task in \Gamma. Let t_h be the time at which T_i is blocked and t_d be the deadline of T_i. According to the DSSTS algorithm, a task is executed at the dynamic low speed S_L while it is not blocked; T_k switches to the high speed S_{H_i} when it blocks T_i, and T_i is then executed at the high speed S_{H_i} until its completion. The processor demand consists of three parts:

1. The tasks are executed at the dynamic low speed S_L in the interval [t', t_h]. According to the analysis of Case 1, the processor demand in [t', t_h] is at most t_h - t'.

2. The tasks are executed at the high speed S_{H_i} in the interval [t_h, t_d]. The processor demand in [t_h, t_d] is at most

\frac{\max\{B_k | P_i < P_k\} + \sum_{j=1}^{i} \lfloor (t_d - t_h) / P_j \rfloor C_j}{S_{H_i}}.

3. The tasks are again executed at the dynamic low speed S_L in the interval [t_d, t]. The processor demand in [t_d, t] is at most t - t_d.

We assumed that a task misses its deadline at time t. Thus

(t_h - t') + \frac{\max\{B_k | P_i < P_k\} + \sum_{j=1}^{i} \lfloor (t_d - t_h) / P_j \rfloor C_j}{S_{H_i}} + (t - t_d) > t - t',

which gives

\frac{\max\{B_k | P_i < P_k\} + \sum_{j=1}^{i} \lfloor (t_d - t_h) / P_j \rfloor C_j}{S_{H_i}} > t_d - t_h.

Since P_i \ge t_d - t_h, we obtain

\frac{\max\{B_k | P_i < P_k\} + \sum_{j=1}^{i} \lfloor P_i / P_j \rfloor C_j}{S_{H_i}} > P_i,

which contradicts Equation (5).
Next, we discuss the case of more than one blocking during [t', t]. Since T_k is the only task that arrives before t' and is executed in [t', t], a higher priority task can only be blocked by T_k or by a task in \Gamma. If a task is blocked by T_k, the maximum blocking time is \max\{B_k | P_i < P_k\}, which has already been considered in the high speed; hence the task set remains feasible under the DSSTS algorithm, as discussed above. If a task is blocked by a task in \Gamma, the blocking time is part of the execution time of the tasks in \Gamma, so it does not increase the processor demand in the interval. Therefore, the task set is still schedulable.

4. Dynamic reclaiming dynamic speed sporadic tasks scheduling

The DSSTS algorithm assumes that each task executes for its worst case execution time.
In fact, the actual execution time of a task is usually smaller than its worst case execution time [1]. Therefore, tasks often complete early and generate slack time at run time. This slack time can be used to decrease the processor speed and thus reduce the energy consumption. For energy efficiency, we present the dynamic reclaiming dynamic speed sporadic tasks scheduling (DRDSSTS) algorithm, which is an extension of the DSSTS algorithm. The dynamic reclaiming method has been studied in [1, 5]. We build a free run time list (FRT-list) [1] to manage the slack time. Each item in the FRT-list contains a task's remaining execution time and its deadline, and the list is sorted by priority; the highest priority item is at the head of the FRT-list. Before formally presenting the algorithm, we introduce some notation:

U_i^F(t) is the total unused time in the FRT-list at time t that can be used by task T_i.

U_i^rem(t) is the available time (time budget) of task T_i at time t.

W_i^rem(t) is the worst case residual execution time of task T_i at the maximum speed at time t.

Let

S_temp = W_i^rem(t) / ( U_i^F(t) + U_i^rem(t) ).

When a task arrives, we set U_i^rem(t) = W_i^rem(t) = C_i. When the task is executed at a speed S, U_i^rem(t) and W_i^rem(t) correspond to U_i^rem(t)/S and W_i^rem(t)/S units of execution time at that speed, respectively.
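A C sketch of this bookkeeping is given below. The budget_t structure and the event hooks are illustrative assumptions; the consumption order in on_execute() follows rule 1 stated after Algorithm 3 below, and the reduction of W_i^rem is one reading consistent with Equation (7) of Lemma 1.

```c
/* Per-task reclaiming state used by the DRDSSTS algorithm (one possible
 * representation; field names are illustrative).                        */
typedef struct {
    double u_rem;   /* U_i^rem(t): remaining time budget of T_i            */
    double w_rem;   /* W_i^rem(t): remaining worst-case work at max speed  */
    double u_f;     /* U_i^F(t): unused FRT-list slack usable by T_i       */
} budget_t;

/* S_temp = W_i^rem(t) / (U_i^F(t) + U_i^rem(t)): the lowest speed at which
 * the remaining worst-case work still fits into the task's own budget
 * plus the reclaimable slack.                                             */
double s_temp(const budget_t *b)
{
    return b->w_rem / (b->u_f + b->u_rem);
}

/* Arrival: budget and residual work both start at C_i. */
void on_arrival(budget_t *b, double c_i, double frt_slack)
{
    b->u_rem = c_i;
    b->w_rem = c_i;
    b->u_f   = frt_slack;
}

/* Execution for dt wall-clock units at speed s: the residual work shrinks
 * by the work actually done (s*dt), while the consumed wall-clock time is
 * charged to the FRT-list slack first and only then to the own budget.    */
void on_execute(budget_t *b, double dt, double s)
{
    b->w_rem -= s * dt;
    double from_frt = (dt < b->u_f) ? dt : b->u_f;
    b->u_f   -= from_frt;
    b->u_rem -= dt - from_frt;
}
```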
Algorithm 3 describes the proposed DRDSSTS algorithm. The dynamic low speed S_L and the high speed S_{H_i} in Algorithm 3 have the same meanings as in the DSSTS algorithm. If task T_i arrives and the processor is idle, we compare the speed S_temp with S_L to decide at which speed T_i will be executed (lines 1-6). If task T_i successfully preempts task T_k, we also compare S_temp with S_L to decide at which speed T_i will be executed (lines 7-11). When task T_i is blocked by task T_k, T_k switches to the high speed S_{H_i} (lines 12-13). When T_k completes its execution, the blocked task T_i begins to execute; we compare S_temp with S_{H_i} to decide at which speed T_i will be executed (lines 14-16). When a task completes its execution and U_i^rem(t) > 0, the unused budget is inserted into the FRT-list. We put the processor into the dormant mode by DPM when the processor is idle and U_i^F(t) >= t_o (lines 23-26).

Algorithm 3 The DRDSSTS algorithm
1. When the task T_i arrives:
2. If (the processor is idle before the task arrival)
3.   If (S_temp <= S_L)
4.     Task T_i is executed at S_temp.
5.   Else task T_i is executed at the dynamic low speed S_L.
6. End if
7. If (P(T_i) > P(T_k))  // T_k is the currently executing task
8.   If (T_i successfully preempts T_k)
9.     If (S_temp <= S_L)
10.      Task T_i is executed at S_temp.
11.    Else task T_i is executed at the dynamic low speed S_L.
12.  Else  /* T_i is blocked by T_k */
13.    Task T_k switches to the high speed S_{H_i} until its completion.
14.    If (S_temp <= S_{H_i})  // after T_k completes its execution
15.      The blocked task T_i is executed at S_temp until its completion.
16.    Else the blocked task T_i is executed at S_{H_i} until its completion.
17.  End if
18. End if
19. When the task T_i completes:
20. If (U_i^rem(t) > 0)
21.   Insert it into the FRT-list.
22. End if
23. When the processor is in the idle mode:
24. If (U_i^F(t) >= t_o)
25.   Put the processor into the dormant mode by DPM.
26. End if

The following rules are used in the DRDSSTS algorithm:
1. When task T_i executes: if U_i^F(t) > 0, it consumes run time from the FRT-list first; otherwise it consumes run time from U_i^rem(t). In addition, W_i^rem(t) is reduced along with U_i^rem(t).
2. When the processor is idle, the run time in the FRT-list is reduced at a rate equal to the passage of time.
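As an illustration, the speed and sleep decisions of Algorithm 3 reduce to the two small helpers below; they build on the budget_t and s_temp() sketch given above and on the break-even test of Section 2.1, and the function names are ours.

```c
#include <stdbool.h>

/* Speed selection of Algorithm 3 (lines 3-5, 9-11 and 14-16): the task runs
 * at S_temp whenever S_temp does not exceed the reference speed, which is
 * S_L for an unblocked task and S_Hi for a task that was blocked; as in
 * Algorithm 2, the speed is never allowed below the critical speed.        */
double drdssts_speed(const budget_t *b, double reference, double s_crit)
{
    double st = s_temp(b);
    double s  = (st <= reference) ? st : reference;
    return (s > s_crit) ? s : s_crit;
}

/* Idle handling (lines 23-26): enter the dormant mode only when the
 * reclaimable FRT-list slack exceeds the break-even time t_o.         */
bool drdssts_should_sleep(double u_f, double t_o)
{
    return u_f >= t_o;
}
```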
Before proving that the DRDSSTS algorithm is feasible, we first show that every task completes its execution before its time budget is depleted.

Lemma 1. When a sporadic task set is scheduled by the DRDSSTS algorithm, every task completes before depleting its time budget.

Proof: The proof method is similar to that in [5]. Choose any time interval in which task T_i is continuously executing, and let t and t + \Delta t be the times at which T_i begins and stops this execution, respectively, with 0 <= \Delta t <= U_i^rem(t) + U_i^F(t). According to Algorithm 3, the worst case residual execution time of T_i and its time budget (available time) at t + \Delta t are, respectively,

W_i^rem(t + \Delta t) = W_i^rem(t) - \frac{W_i^rem(t)}{U_i^rem(t) + U_i^F(t)} \Delta t    (7)

and

U_i^rem(t + \Delta t) = U_i^rem(t) - ( \Delta t - U_i^F(t) ).    (8)

Thus, we have

\frac{U_i^rem(t + \Delta t)}{W_i^rem(t + \Delta t)} \ge \frac{U_i^rem(t)}{W_i^rem(t)},  for all \Delta t, 0 <= \Delta t <= U_i^rem(t) + U_i^F(t),    (9)

which means that the ratio between U_i^rem(t) and W_i^rem(t) does not decrease while the task executes. Since the initial ratio is 1, a task will not exhaust its own budget before it completes.

The following theorem shows that the DRDSSTS algorithm is feasible.
Theorem 4. Given a sporadic task set in which every task releases an instance within its minimum separation period and that is scheduled by the DRDSSTS algorithm, all tasks complete their execution before their deadlines if S_L is computed by Algorithm 1 and the high speeds S_{H_i} are computed by Equation (5).

Proof. According to Lemma 1, every task completes before its time budget is depleted. To prove that all tasks complete before their deadlines, we therefore only need to prove that every time budget is consumed before its deadline. We prove this by contradiction and assume that some task misses its deadline. Let t be the earliest time at which the time budget of a task T_i is not depleted at its deadline. Let t' be the latest time point such that no task arriving before t' has a deadline at or before t and there is no time budget in the FRT-list from a task with a deadline at or before t; if no such point exists, let t' = 0. This means that time budget is continuously consumed in [t', t]. The budget consumed in [t', t] falls into two classes: budget generated by tasks that arrive no earlier than t' and have deadlines at or before t, denoted budget A, and budget generated by tasks with deadlines after t, denoted budget B.

Case 1: only budget in A is consumed. In this case, the consumed budget is generated by tasks arriving in [t', t], so the budget in A is limited by \sum_{i=1}^{n} \lfloor (t - t') / P_i \rfloor (C_i / S_L). Because there is still budget left at t, the budget in A is larger than the budget consumed in the interval, which is t - t'. Therefore,

\sum_{i=1}^{n} \lfloor (t - t') / P_i \rfloor (C_i / S_L) > t - t',    (10)

\sum_{i=1}^{n} \frac{t - t'}{P_i} \frac{C_i}{S_L} > t - t',    (11)

\sum_{i=1}^{n} C_i / P_i > S_L,    (12)

which contradicts the value of the dynamic low speed S_L (according to Algorithm 1, S_L is greater than or equal to \sum_{i=1}^{n} C_i / P_i when every task releases its instances within its minimum separation period).
Case 2: budget in both A and B is consumed. Budget in B may be consumed anywhere in [t', t]. Assume that a task T_k with a deadline later than t starts to consume the budget in the FRT-list whose deadlines are earlier than or equal to t; it first depletes the budget in A and then starts to consume the budget in B. Let t_1 be the latest time point before t at which the budget of a task with a deadline later than t is consumed. Since budget in B is consumed in [t', t], t_1 must exist and t' <= t_1 < t. At least one task with a deadline at or before t must have arrived and been blocked before the budget in the FRT-list is depleted; otherwise, T_k would have been preempted before t_1, or t' <= t_1 would be violated. Note that the budget in the FRT-list with deadlines earlier than or equal to t is depleted at t_1. Assume that task T_i is the first task blocked by T_k, and let t_2 (t_2 <= t_1) be the arrival time of T_i. The budget consumed by T_k was generated before t_2, and the budget consumed in [t_1, t] is generated in [t_2, t]. In addition, the budget of the tasks with deadlines earlier than or equal to t is also consumed in [t_2, t]. The tasks switch to the high speed S_{H_i} in [t_2, t]. The amount of budget generated after t_2 by the tasks with deadlines at or before t is bounded by \sum_{k=1}^{i} \lfloor (t - t_2) / P_k \rfloor C_k / S_{H_i}. In addition, according to the DRDSSTS algorithm, the tasks are executed at the high speed S_{H_i} in [t_2, t_1]. The total amount of budget that can be consumed in [t_2, t] is therefore bounded by

B_k / S_{H_i} + \sum_{k=1}^{i} \lfloor (t - t_2) / P_k \rfloor C_k / S_{H_i},

where B_k is the maximum blocking time over all instances of task T_k. Because a task misses its deadline at t, i.e., there is still budget left at t, we have

B_k / S_{H_i} + \sum_{k=1}^{i} \lfloor (t - t_2) / P_k \rfloor C_k / S_{H_i} > t - t_2.    (13)

Let \hat{t} = t - t_2; then P_1 <= \hat{t} <= P_i (1 <= i <= n) and

\frac{B_k + \sum_{k=1}^{i} \lfloor \hat{t} / P_k \rfloor C_k}{S_{H_i}} > \hat{t},    (14)
which contradicts Equation (5).

5. Performance Evaluation

In this section, we implement an event-driven scheduling simulator written in the C language. The simulator is based on the EDF scheduling policy and the SRP protocol. We use an Intel XScale PXA270 processor; its power model is approximately P = 0.08 + 1.52 S^3 and its idle power is 0.085 [21]. In addition, we assume that the processor can operate from the minimum speed of 0.17 to the maximum speed of 1.0. To evaluate the effectiveness of the proposed algorithms, we implement four algorithms in the simulator. The first is the SRP algorithm, in which all tasks are executed at the high speed; its energy consumption is used as the baseline, and the energy consumption of the other algorithms is normalized with respect to it. The second is the DS algorithm proposed in [5], in which each task is executed at the static low speed or at the high speed. The third is the DSSTS algorithm, which executes tasks at the dynamic low speed or at the high speed: a task starts at the dynamic low speed S_L, i.e., without considering the blocking time of lower priority tasks, and switches to the high speed, which takes the blocking time into account, when it is blocked by a lower priority task; the DSSTS algorithm assumes that each task executes for its worst case execution time. The fourth is the DRDSSTS algorithm, which extends the DSSTS algorithm and reclaims the dynamic slack time generated by early-completing tasks to reduce the energy consumption.

We randomly generate the sporadic task sets used in the simulation experiments. Each task set contains 15 sporadic tasks, and we randomly choose seven of them to access shared resources; in other words, these seven sporadic tasks have critical sections and the other tasks do not. The minimum separation period of a sporadic task T_i is generated from [10, 1000]. The worst case execution time (WCET) of a sporadic task is chosen between 1 and its minimum separation period. After a task set is generated, the WCETs are adjusted so that the system utilization does not exceed the given value. The position and the length of each critical section are selected randomly; the maximum length of a critical section Z_{i,j} is bf * WCET, where bf is the size of the critical section as a percentage of the task's WCET. The actual workload is controlled by the WCET/BCET ratio, where BCET is the best-case execution time; the actual execution time of a task is uniformly distributed between the task's BCET and WCET. Each simulation experiment contains 100 sporadic task sets, the simulation time is set to 100000 time units, and the reported results are the average over the 100 task sets. In addition, we assume that the energy overhead of switching from the dormant mode to the active mode is 200 uJ, i.e., E_o = 0.2 mJ.
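A sketch of this task-set generation step is shown below, reusing the hypothetical task_t record. The uniform-sampling helper and the utilization-scaling step are implementation assumptions; the blocking times and the critical-section placement are filled in separately and are therefore omitted here.

```c
#include <stdlib.h>

/* Uniform random double in [lo, hi] (helper assumed for this sketch). */
static double rand_uniform(double lo, double hi)
{
    return lo + (hi - lo) * ((double)rand() / RAND_MAX);
}

/* Generate n sporadic tasks and scale their WCETs so that the total
 * utilization matches the target value, as described above.          */
void generate_task_set(task_t *t, int n, double u_target)
{
    double util = 0.0;
    for (int i = 0; i < n; i++) {
        t[i].p = rand_uniform(10.0, 1000.0);     /* period in [10, 1000] */
        t[i].c = rand_uniform(1.0, t[i].p);      /* WCET in [1, P_i]     */
        t[i].b = 0.0;                            /* blocking filled in later
                                                    from the critical sections */
        util  += t[i].c / t[i].p;
    }
    for (int i = 0; i < n; i++)                  /* scale WCETs to the target */
        t[i].c *= u_target / util;
}
```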
5.1 Performance of the DSSTS algorithm

In this section, we assume that all tasks execute for their worst case execution time. We fix bf = 0.15 and bf = 0.3, respectively, and vary the system utilization between 0.15 and 0.8. The experimental results are shown in Figure 2 and Figure 3. From Figure 2 and Figure 3, we can see that the energy consumption of all algorithms is sensitive to the system utilization. When the system utilization is lower than 0.3, the energy consumption of the DS algorithm and the DSSTS algorithm is the same, because the speeds computed by these algorithms are lower than the critical speed (S_crit = 0.3) and all tasks are therefore executed at the critical speed. When the system utilization is larger than 0.3, the normalized energy of the DSSTS algorithm decreases while that of the DS algorithm increases. For the DS algorithm, as the system utilization increases, the worst case execution times of the tasks increase and the static low speed also increases. For the DSSTS algorithm, the dynamic low speed is never higher than the static low speed; when the static low speed increases, the dynamic low speed may not change. Since the DSSTS algorithm schedules the tasks at the dynamic low speed, its energy consumption decreases quickly. As the blocking factor bf increases, the maximum blocking time increases, and thus the high speed increases, so the energy consumption of the SRP algorithm increases. The other algorithms are not sensitive to the blocking factor, because a task is executed at the high speed only when it is blocked by a lower priority task. All in all, the energy consumption of the DSSTS algorithm is lower than that of the DS algorithm: the DSSTS algorithm reduces the energy consumption by 37.28% on average compared with the DS algorithm.

5.2 Performance of the DRDSSTS algorithm

5.2.1 Effect of actual workload

We fix the system utilization to 0.5 and the blocking factor to bf = 0.15, and vary the WCET/BCET ratio between 1 and 10. The experimental results are shown in Figure 4. From Figure 4, we can see that the energy consumption of all algorithms is sensitive to the WCET/BCET ratio. When the WCET/BCET ratio is equal to 1, there is no dynamic slack time, so the energy consumption of the DRDSSTS algorithm and the DSSTS algorithm is the same. As the WCET/BCET ratio increases beyond 1, the average execution time of the tasks decreases. However, the normalized energy consumption of the DS algorithm and the DSSTS algorithm increases, because their energy consumption decreases more slowly than that of the SRP algorithm, which is used as the baseline. In addition, the energy consumption of the DRDSSTS algorithm is always lower than that of the other algorithms, because the DRDSSTS algorithm uses not only the DVS technique but also the DPM technique to reduce the energy consumption. All in all, the DRDSSTS algorithm consumes 0~23.78% less energy than the DSSTS algorithm and 45.63% less energy on average than the DS algorithm.

5.2.2 Effect of utilization

We fix the WCET/BCET ratio to 5 and the blocking factor to bf = 0.15, and vary the system utilization between 0.15 and 0.8. The experimental results are shown in Figure 5. From Figure 5, we can see that the energy consumption of all algorithms is sensitive to the system utilization. When the system utilization is lower than 0.3, the energy consumption of the DS algorithm and the DSSTS algorithm is the same, but the energy consumption of the DRDSSTS algorithm is lower than that of both: the speeds computed by these algorithms are lower than the critical speed (S_crit = 0.3), so all tasks are executed at the critical speed, and the DRDSSTS algorithm can additionally use the DPM technique to reduce the energy consumption.
When the system utilization is larger than 0.3, the normalized energy of the DSSTS algorithm and of the DRDSSTS algorithm decreases. This is because the dynamic low speed is never higher than the static low speed; when the static low speed increases, the dynamic low speed may not change, so the energy consumption of these algorithms decreases quickly. In addition, as the system utilization increases, the energy saving of the DRDSSTS algorithm over the DSSTS algorithm decreases, because less slack time is available to reduce the energy consumption. All in all, the energy consumption of the DRDSSTS algorithm is always lower than that of the other algorithms: it reduces the energy consumption by 2.64%~37.16% compared with the DSSTS algorithm and consumes 35.16%~71.15% less energy than the DS algorithm.

5.2.3 Effect of discrete speeds

We conduct an experiment to examine the effect of discrete speed levels on the energy consumption of each approach. The processor speed is limited to {0.17, 0.33, 0.5, 0.67, 0.83, 1.0} [21]; if an assigned speed is not one of the speeds provided by the processor, the execution speed of the task is raised to the next higher available speed level so that the deadline constraints are still met. We fix the WCET/BCET ratio to 5 and the blocking factor to bf = 0.15, and vary the system utilization between 0.15 and 0.8. The energy consumption of the DSSTS algorithm and of the DRDSSTS algorithm under discrete speeds is denoted by DSSTS_DISCRETE and DRDSSTS_DISCRETE, respectively. We normalize the energy consumption with respect to that of DSSTS_DISCRETE at a system utilization of 0.8. The experimental result is shown in Figure 6. From Figure 6, we can see that the energy consumption of DSSTS_DISCRETE and DRDSSTS_DISCRETE is higher than that of DSSTS and DRDSSTS, respectively. The energy consumption of DSSTS_DISCRETE increases by approximately 7.36% compared with the DSSTS algorithm, and the energy consumption of DRDSSTS_DISCRETE increases by approximately 5.88% compared with the DRDSSTS algorithm. The performance gap between the DRDSSTS algorithm and the DSSTS algorithm increases because the reclaimed slack may be adequate to put the processor into the dormant mode.

5.2.4 Effect of the number of tasks

We conduct an experiment to examine the effect of the number of tasks on the energy consumption of each approach. The task sets consist of 5 tasks and 30 tasks, respectively; 2 tasks have critical sections in the 5-task sets and 15 tasks have critical sections in the 30-task sets. We fix the WCET/BCET ratio to 5 and the blocking factor to bf = 0.15, and vary the system utilization between 0.15 and 0.8. The experimental results are shown in Figure 7 and Figure 8. As shown in Figures 5, 7 and 8, the energy consumption of all algorithms increases slightly as the number of tasks increases, because the available slack time has fewer chances to reduce the energy consumption. The DRDSSTS algorithm reduces the energy consumption by 32.44% on average compared with the DSSTS algorithm in Figure 7, and by 21.28% and 16.36% on average in Figure 5 and Figure 8, respectively. The energy saving of the DRDSSTS algorithm thus continues to decrease as the number of tasks increases, because there are fewer opportunities to use the available slack time to reduce the energy consumption.

6 Conclusions

In this paper, we have studied the problem of minimizing the overall energy consumption of sporadic tasks with shared resources in a hard real-time system.
We first proposed a novel scheduling algorithm, called DSSTS, for sporadic tasks with shared resources. The DSSTS algorithm assumes that each task executes for its worst case execution time and considers the general power model. It is based on the SRP resource protocol and the EDF scheduling policy: each task is scheduled at the dynamic low speed and switches to the high speed when it is blocked by a lower priority task. For further energy efficiency, we proposed a dynamic reclaiming dynamic speed sporadic task scheduling algorithm, called DRDSSTS. DRDSSTS is an extension of the DSSTS algorithm that combines the DVS technique and the DPM technique: it reclaims the dynamic slack time generated by early-completing tasks to adjust the processor speed, and it uses the DPM technique to put the processor into the dormant mode to save energy when the processor is idle. The experimental results show that the DRDSSTS algorithm reduces the energy consumption by 2.64%~37.16% compared with the DSSTS algorithm and consumes 35.16%~71.15% less energy than the DS algorithm.

Acknowledgements

This work has been supported by the Introduction of Talents Huaqiao University Scientific Research Projects (Grant 16BS104), the National Natural Science Foundation of China (Grant 61403150), and the Project of Science and Technology Plan of Fujian Province of China (Grant 2017H01010065).
References
[1] H. Aydin, R. Melhem, D. Mossé. Dynamic and aggressive scheduling techniques for power-aware real-time systems. In: Proceedings of the 22nd IEEE Real-Time Systems Symposium, 2001, pp. 192-211.
[2] A. Ejlali, B. M. Al-Hashimi, P. Eles. Low-energy standby-sparing for hard real-time systems. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 31(3) (2012) 329-342.
[3] R. Jejurikar, C. Pereira, R. Gupta. Dynamic slack reclamation with procrastination scheduling in real-time embedded systems. In: Proceedings of the 42nd Design Automation Conference, 2005, pp. 111-116.
[4] L. Niu, W. Li. Energy-efficient fixed-priority scheduling for real-time systems based on threshold work-demand analysis. In: Proceedings of the 9th International Conference on Hardware/Software Codesign and System Synthesis, 2011, pp. 159-168.
[5] F. Zhang, S. T. Chanson. Processor voltage scheduling for real-time tasks with non-preemptible sections. In: Proceedings of the IEEE Real-Time Systems Symposium, 2002, pp. 235-245.
[6] J. Lee, K. Koh, C. Lee. Multi-speed DVS algorithms for periodic tasks with non-preemptible sections. In: Proceedings of the IEEE International Conference on Embedded and Real-Time Computing Systems and Applications, 2007, pp. 459-468.
[7] R. Jejurikar. Energy aware non-preemptive scheduling for hard real-time systems. In: Proceedings of the 17th Euromicro Conference on Real-Time Systems, 2005, pp. 21-30.
[8] J. Li, L. Shu, J. Chen, G. Li. Energy-efficient scheduling in nonpreemptive systems with real-time constraints. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 43(2) (2013) 332-344.
[9] R. Jejurikar, R. Gupta. Dual mode algorithm for energy aware fixed priority scheduling with task synchronization. In: Workshop on Compilers and Operating Systems for Low Power, 2003.
[10] R. Jejurikar, R. Gupta. Energy-aware task scheduling with task synchronization for embedded real-time systems. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 25(6) (2006) 1024-1037.
[11] J. Wu. Energy-efficient scheduling of real-time tasks with shared resources. Future Generation Computer Systems, 56 (2015) 179-191.
[12] M.-F. Horng, C.-S. Huang, Y.-H. Kuo, J.-W. Hu. Scheduling sporadic, hard real-time tasks with resources. In: Proceedings of the 3rd International Conference on Innovative Computing, Information and Control, 2008, pp. 84-87.
[13] Y.-W. Zhang, R.-F. Guo. Low power scheduling algorithms for sporadic task with shared resources in hard real-time systems. The Computer Journal, 58(7) (2015) 1585-1597.
[14] H. Aydin, V. Devadas, D. Zhu. System-level energy management for periodic real-time tasks. In: Proceedings of the 27th IEEE Real-Time Systems Symposium, 2006, pp. 313-322.
[15] Y.-W. Zhang, R.-F. Guo. Power-aware scheduling algorithms for sporadic tasks in real-time systems. The Journal of Systems and Software, 86(10) (2013) 2611-2619.
[16] B. Zhao, H. Aydin, D. Zhu. Energy management under general task-level reliability constraints. In: Proceedings of the 18th IEEE Real-Time and Embedded Technology and Applications Symposium, 2012, pp. 285-294.
[17] Y. Zhu, F. Mueller. DVSleak: combining leakage reduction and voltage scaling in feedback EDF scheduling. In: Proceedings of LCTES, 2007, pp. 31-40.
[18] A. Silberschatz, P. B. Galvin, G. Gagne. Operating System Concepts. Wiley, New York, 2001.
[19] T. P. Baker. Stack-based scheduling of real-time processes. Journal of Real-Time Systems, 3(1) (1991) 67-99.
[20] M.-I. Chen, K.-J. Lin. Dynamic priority ceilings: a concurrency control protocol for real-time systems. Real-Time Systems, 2(4) (1990) 325-346.
[21] J.-J. Chen, T.-W. Kuo. Procrastination determination for periodic real-time tasks in leakage-aware dynamic voltage scaling systems. In: Proceedings of the IEEE/ACM International Conference on Computer-Aided Design, 2007, pp. 289-294.
[22] A. Qadi, S. Goddard, S. Farritor. A dynamic voltage scaling algorithm for sporadic tasks. In: Proceedings of the 24th IEEE Real-Time Systems Symposium, 2003, pp. 52-62.
[23] Y.-W. Zhang, R.-F. Guo. Power-aware fixed priority scheduling for sporadic tasks in hard real-time systems. The Journal of Systems and Software, 90 (2014) 128-137.
[24] J. Han, X. Wu, D. Zhu, H. Jin, L. Yang, J. Gaudiot. Synchronization-aware energy management for VFI-based multicore real-time systems. IEEE Transactions on Computers, 61(12) (2012) 1682-1696.
[25] H. F. Sheikh, I. Ahmad, D. Fan. An evolutionary technique for performance-energy-temperature optimized scheduling of parallel tasks on multi-core processors. IEEE Transactions on Parallel and Distributed Systems, 27(3) (2016) 668-681.
[26] T.-H. Tsai, L.-F. Fan, Y.-S. Chen, et al. Triple speed: energy-aware real-time task synchronization in homogeneous multi-core systems. IEEE Transactions on Computers, 65(4) (2016) 1297-1309.
Figure 1. (a) Task arrival times, deadlines and time parameters. (b) Tasks scheduled by the DS algorithm. (c) Improved schedule with dynamic speeds.
[Figure 2 plots the normalized energy consumption versus the utilization for the SRP, DS and DSSTS algorithms.]
Figure 2. The DSSTS algorithm with bf = 0.15.
[Figure 3 plots the normalized energy consumption versus the utilization for the SRP, DS and DSSTS algorithms.]
Figure 3. The DSSTS algorithm with bf = 0.3.
[Figure 4 plots the normalized energy consumption versus the WCET/BCET ratio for the SRP, DS, DSSTS and DRDSSTS algorithms.]
Figure 4. The DRDSSTS algorithm with bf = 0.15 and U_tot = 0.5.
[Figure 5 plots the normalized energy consumption versus the utilization for the SRP, DS, DSSTS and DRDSSTS algorithms.]
Figure 5. The DRDSSTS algorithm with bf = 0.15 and WCET/BCET = 5 (15 tasks).
[Figure 6 plots the normalized energy consumption versus the utilization for DRDSSTS_DISCRETE, DRDSSTS, DSSTS_DISCRETE and DSSTS.]
Figure 6. The effect of discrete speeds.
[Figure 7 plots the normalized energy consumption versus the utilization for the SRP, DS, DSSTS and DRDSSTS algorithms.]
Figure 7. The DRDSSTS algorithm with bf = 0.15 and WCET/BCET = 5 (5 tasks).
[Figure 8 plots the normalized energy consumption versus the utilization for the SRP, DS, DSSTS and DRDSSTS algorithms.]
Figure 8. The DRDSSTS algorithm with bf = 0.15 and WCET/BCET = 5 (30 tasks).