Trio extends the existing Linux scheduler with support for proportional-share scheduling. It uses a scheduling algorithm, called Distributed Weighted Round-Robin (DWRR), which retains the existing scheduler design as much as possible, and extends it to achieve proportional fairness with O(1) time complexity and a constant error bound, compared to the ideal fair scheduling
algorithm. The goal of Trio is not to improve interactive performance; rather, it relies on the existing scheduler for interactivity and extends it to support MP proportional fairness.

Trio has two unique features: (1) it enables users to control shares of CPU time for any thread or group of threads (e.g., a process, an application, etc.), and (2) it enables fair sharing of CPU time across multiple CPUs. For example, with ten tasks running on eight CPUs, Trio allows each task to take an equal fraction of the total CPU time, whereas no existing scheduler
achieves such fairness. These features enable Trio to complement the mainline scheduler and other proposals such as CFS and SD, providing greater user flexibility and stronger fairness.

Over the years, there has been a lot of criticism that conventional Unix priorities and the nice interface provide insufficient support for users to accurately control CPU shares of different threads or applications. Many have studied scheduling algorithms that achieve proportional fairness. Assuming that each thread has a weight that expresses its desired CPU share, informally, a scheduler is proportionally fair if (1) it is work-conserving, and (2) it allocates CPU time to threads in exact proportion to their weights in any time interval. Ideal proportional fairness is impractical since it requires that all runnable threads be running simultaneously and scheduled with infinitesimally small quanta. In practice, every proportional-share scheduling algorithm approximates the ideal algorithm with the goal of achieving a constant error bound. For more theoretical background, please
refer to the following papers:

[1] A. K. Parekh and R. G. Gallager. A generalized processor sharing
approach to flow control in integrated services networks: The single-node
case. IEEE/ACM Transactions on Networking, 1(3):344-357, June 1993.

[2] J. C. R. Bennett and H. Zhang. WF2Q: Worst-case fair weighted fair
queueing. In Proceedings of IEEE INFOCOM '96, pages 120-128, Mar. 1996.

Previous proportional-share scheduling algorithms, however, suffer one or more of the following problems:

(1) Inaccurate fairness with non-constant error bounds;
(2) High run-time overhead (e.g., logarithmic);
(3) Poor scalability due to the use of a global thread queue;
(4) Inefficient support for latency-sensitive applications.

Since the Linux scheduler has been successful at avoiding problems 2 to 4, this design attempts to extend it with support for accurate proportional fairness while retaining all of its existing benefits.

User Interface:

By default, each thread is assigned a weight proportional to its static priority. A set of system calls also allows users to specify a weight or reservation for any thread. Weights are relative. For example, for two threads with weights 3 and 1, the scheduler ensures that the ratio of their CPU time is 3:1. Reservations are absolute and take the form of X% of the total CPU time. For example, a reservation of 80% for a thread means that the thread always receives at least 80% of the total CPU time, regardless of other threads.

The system calls also support specifying weights or reservations for groups of threads. For example, one can specify an 80% reservation for a group of threads (e.g., a process) to control the total CPU share to which the member threads are collectively entitled.  Within the group, the user can further specify local weights to different threads to control their relative shares.

Scheduling Algorithm:

The scheduler keeps a set of data structures, called Trio groups, to maintain the weight or reservation of each thread group (consisting of one or more threads) and the local weight of each member thread. When scheduling a thread, it consults these data structures and computes (in constant time) a system-wide weight for the thread that represents an equivalent CPU share. Consequently, the scheduling algorithm, DWRR, operates solely on the system-wide weight (or weight for short, hereafter) of each thread.

For each processor, besides the existing active and expired arrays, DWRR keeps one more array, called round-expired. It also keeps a round number for each processor, initially all zero. A thread is said to be in round R if it is in the active or expired array of a round-R processor. For each thread, DWRR associates it with a round slice, equal to its weight multiplied by a scaling
factor, which controls the total time that the thread can run in any round. When a thread exhausts its time slice, as in the existing scheduler, DWRR moves it to the expired array. However, when it exhausts its round slice, DWRR moves it to the round-expired array, indicating that the thread has finished round R. In this way, all threads in the active and expired arrays on a round-R processor are running in round R, while the threads in the round-expired array have finished round R and are waiting to start round R+1. Threads in the active and expired arrays are scheduled the same way as in the existing scheduler.

When a processor's active array is empty, as usual, the active and expired arrays are switched. When both active and expired are empty, DWRR eventually wants to switch the active and round-expired arrays, thus advancing the current processor to the next round. However, to guarantee fairness, it needs to maintain the invariant that the rounds of all processors differ by at most
one (when each processor has more than one thread in the run queue). Given this invariant, it can be shown that, during any time interval, the number of rounds that any two threads go through differs by at most one. This property is key to ensuring DWRR's constant error bound compared to the ideal algorithm (formal proofs available upon request).

To enforce the above invariant, DWRR keeps track of the highest round (referred to as highest) among all processors at any time and ensures that no processor in round highest can advance to round highest+1 (thus updating highest), if there exists at least one thread in the system that is in round highest and not currently running. Specifically, it operates as follows:

On any processor p, whenever both the active and expired arrays become empty, DWRR compares the round of p with highest. If equal, it performs idle load balancing in two steps: (1) It identifies runnable threads that are in round highest but not currently running. Such threads can be in the active or expired array of a round-highest processor, or in the round-expired array of a round-(highest - 1) processor. (2) Among those threads from step 1, it moves X of them to the active array of p, where X is a design choice and does not impact the fairness properties of DWRR. If step 1 returns no suitable threads, DWRR proceeds as if the round of processor p were less than highest, in which case it switches p's active and round-expired arrays and increments p's round by one, thus allowing all threads in its round-expired array to advance to the next round.

(Note: in the current source, X is chosen as half of the threads identified in step 1; as stated above, this choice does not affect DWRR's fairness properties.)

Whenever the system creates a new thread or awakens an existing one, DWRR inserts the thread into the active array of an idle processor and sets the processor's round to the current value of highest. If no idle processor exists, it starts the thread on the least loaded processor among those in round highest.

(Note: this placement is done in the wake_up_~~() path, which changed considerably between 2.6.24 and 2.6.32 and was the painful part of the port.)

Whenever a processor goes idle (i.e., all of its three arrays are empty), DWRR resets its round to zero. Similar to the existing scheduler, DWRR also performs periodic load balancing but only among processors in round highest. Unlike idle load balancing, periodic load balancing only improves performance and is not necessary for fairness.
