Stochastic scheduling

The objectives of stochastic scheduling problems can be regular, such as minimizing the total flowtime, the makespan, or the total tardiness cost of missing the due dates; or they can be irregular, such as minimizing both the earliness and tardiness costs of completing the jobs, or minimizing the total cost of scheduling tasks under the likely arrival of a disastrous event such as a severe typhoon.
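As an illustrative formalization (the notation here is chosen for exposition and is not taken from a cited source), write $C_j$ for the completion time, $d_j$ for the due date, and $w_j$ for the cost weight of job $j$. Common regular objectives are then

$$\sum_j w_j C_j \ \text{(total weighted flowtime)}, \qquad C_{\max}=\max_j C_j \ \text{(makespan)}, \qquad \sum_j w_j \max(C_j-d_j,0) \ \text{(total tardiness cost)},$$

while an irregular earliness and tardiness objective can be written as $\sum_j \bigl(\alpha_j\max(d_j-C_j,0)+\beta_j\max(C_j-d_j,0)\bigr)$, with separate earliness weights $\alpha_j$ and tardiness weights $\beta_j$.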

These three types of problems are usually studied under the assumption that complete information is available, in the sense that the probability distributions of the random variables involved are known in advance.

The Bayesian method has been applied to treat stochastic scheduling problems with incomplete information.

In this class of problems, jobs with random processing times, whose distributions are known, have to be completed by a set of machines so as to optimize a given performance objective.

Job processing times are independent random variables with a general distribution.

Admissible policies must be nonanticipative (scheduling decisions are based on the system's history up to and including the present time) and nonpreemptive (processing of a job must proceed uninterruptedly to completion once started).

Let $w_i$ denote the cost rate incurred per unit time in the system for job $i$, and let $p_i$ denote its processing time.

The optimal solution in the special deterministic case is given by the Shortest Weighted Processing Time rule of Smith:[3] sequence the jobs in nonincreasing order of the priority index $w_i/p_i$.[4]
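A minimal sketch of this rule in Python (the function names and the three-job instance below are illustrative, not taken from a cited source):

```python
def wspt_order(jobs):
    """Smith's rule: sequence jobs in nonincreasing order of w_i / p_i.

    `jobs` is a list of (name, weight, processing_time) tuples with
    deterministic processing times; ties may be broken arbitrarily."""
    return sorted(jobs, key=lambda j: j[1] / j[2], reverse=True)

def total_weighted_flowtime(sequence):
    """Total weighted flowtime sum_i w_i * C_i for a given sequence."""
    total, clock = 0.0, 0.0
    for _, weight, proc_time in sequence:
        clock += proc_time            # completion time C_i of this job
        total += weight * clock
    return total

jobs = [("A", 3.0, 2.0), ("B", 1.0, 4.0), ("C", 2.0, 1.0)]
print(wspt_order(jobs))                           # order C, A, B
print(total_weighted_flowtime(wspt_order(jobs)))  # 18.0, minimal over all orders
```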

In general, the rule that assigns higher priority to jobs with shorter expected processing time is optimal for the flowtime objective under any of the following conditions: when all the job processing time distributions are exponential;[5] when all the jobs have a common general processing time distribution with a nondecreasing hazard rate function;[6] and when the job processing time distributions are stochastically ordered.[7]
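As a small illustrative check (the rates below are arbitrary and the code is a sketch, not drawn from a cited source), the following Monte Carlo estimate compares the expected total flowtime of the shortest-expected-processing-time order against the reverse order when processing times are exponential:

```python
import random

def expected_flowtime(rates, order, runs=20000, seed=0):
    """Estimate E[sum_i C_i] on one machine when job i's processing time
    is exponential with the given rate (mean 1/rate), served in `order`."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        clock = 0.0
        for i in order:
            clock += rng.expovariate(rates[i])  # realized processing time of job i
            total += clock                       # completion time C_i
    return total / runs

rates = [0.5, 1.0, 4.0]                               # expected times 2.0, 1.0, 0.25
sept = sorted(range(3), key=lambda i: 1.0 / rates[i])  # shortest expected time first
lept = list(reversed(sept))                            # longest expected time first
print(expected_flowtime(rates, sept))   # lower expected total flowtime (about 4.75)
print(expected_flowtime(rates, lept))   # higher expected total flowtime (about 8.25)
```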

Multi-armed bandit models form a particular type of optimal resource allocation (usually working with time assignment), in which a number of machines or processors are to be allocated to serve a set of competing projects (termed arms).

In the typical framework, the system consists of a single machine and a set of stochastically independent projects, which contribute random rewards, either continuously or at certain discrete time points, when they are served.

The objective is to maximize the expected total discounted reward over all dynamically revisable policies.[1]
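To make this objective concrete, the following sketch estimates the expected total discounted reward under two policies in a simple Bernoulli-reward setting; the reward model, the parameters, and both policies are illustrative assumptions rather than part of the cited framework.

```python
import random

def discounted_reward(policy, success_probs, discount=0.9, horizon=200,
                      runs=2000, seed=0):
    """Estimate the expected total discounted reward when `policy(t, pulls, wins)`
    picks which arm the single machine serves in period t.

    Each arm pays a Bernoulli(success_probs[arm]) reward when served."""
    rng = random.Random(seed)
    n = len(success_probs)
    total = 0.0
    for _ in range(runs):
        pulls, wins = [0] * n, [0] * n
        g = 0.0
        for t in range(horizon):
            arm = policy(t, pulls, wins)
            reward = 1.0 if rng.random() < success_probs[arm] else 0.0
            pulls[arm] += 1
            wins[arm] += reward
            g += (discount ** t) * reward
        total += g
    return total / runs

# A static policy that always serves arm 0, versus a dynamically revisable
# policy that serves the arm with the best empirical success rate so far.
static = lambda t, pulls, wins: 0
def greedy(t, pulls, wins):
    if t < len(pulls):                  # serve each arm once before comparing
        return t
    return max(range(len(pulls)), key=lambda a: wins[a] / pulls[a])

probs = [0.3, 0.6, 0.5]
print(discounted_reward(static, probs))
print(discounted_reward(greedy, probs))
```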

The first version of multi-armed bandit problems was formulated in the area of sequential designs by Robbins (1952).[13]

Other extensions include the models of restless bandits, formulated by Whittle (1988),[14] in which each arm evolves restlessly according to two different mechanisms (an idle fashion and a busy fashion), and the models with switching costs or delays by Van Oyen et al. (1992),[15] who showed that no index policy is optimal when switching between arms incurs costs or delays.

Models in this class are concerned with the problems of designing optimal service disciplines in queueing systems, where the jobs to be completed arrive at random epochs over time, instead of being available at the start.

The simplest types of multiclass queueing networks (MQNs) involve scheduling a number of job classes on a single server.

More general MQN models involve features such as changeover times for changing service from one job class to another (Levy and Sidi, 1990),[16] or multiple processing stations, which provide service to corresponding nonoverlapping subsets of job classes.

Due to the intractability of such models, researchers have aimed to design relatively simple heuristic policies which achieve a performance close to optimal.
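As one hedged illustration of such a heuristic (the two-class model, the parameters, and the static priority rule below are assumptions made for this sketch, not a specific policy from the cited literature), a small simulation can compare first-come first-served service with a static priority discipline in a single-server system fed by two Poisson job classes:

```python
import random

def simulate(priority_first, lam=(0.3, 0.3), mu=(1.0, 1.5), cost=(4.0, 1.0),
             horizon=50_000, seed=1):
    """Single nonpreemptive server with two Poisson job classes.

    If priority_first is True, class 0 jobs are always served before class 1
    jobs; otherwise service is first-come first-served across classes.
    Returns the approximate holding cost per unit time, computed as the sum
    of cost-rate * time-in-system over completed jobs, divided by the horizon."""
    rng = random.Random(seed)
    arrivals = []                       # (arrival time, class) pairs
    for c in (0, 1):
        t = 0.0
        while True:
            t += rng.expovariate(lam[c])
            if t > horizon:
                break
            arrivals.append((t, c))
    arrivals.sort()
    queue = [[], []]                    # FIFO list of arrival times per class
    total_cost, clock, i = 0.0, 0.0, 0
    while i < len(arrivals) or queue[0] or queue[1]:
        if not (queue[0] or queue[1]):
            clock = max(clock, arrivals[i][0])     # idle until the next arrival
        while i < len(arrivals) and arrivals[i][0] <= clock:
            t, c = arrivals[i]                      # admit jobs that have arrived
            queue[c].append(t)
            i += 1
        if priority_first:
            c = 0 if queue[0] else 1
        else:
            c = min((k for k in (0, 1) if queue[k]), key=lambda k: queue[k][0])
        arrived = queue[c].pop(0)
        clock += rng.expovariate(mu[c])             # serve the selected job
        total_cost += cost[c] * (clock - arrived)
    return total_cost / horizon

print(simulate(priority_first=False))   # first-come first-served
print(simulate(priority_first=True))    # static priority to the costlier class
```

Comparing the two printed numbers shows how the choice of service discipline changes the average holding cost under these illustrative parameters.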

The majority of studies on stochastic scheduling models have been established under the assumption of complete information, in the sense that the probability distributions of the random variables involved, such as the processing times and the machine up/downtimes, are completely specified a priori.

Examples of scheduling with incomplete information can be found in environmental clean-up,[17] project management,[18] petroleum exploration,[19] sensor scheduling in mobile robots,[20] and cycle time modeling,[21] among many others.

As a result of incomplete information, there may be multiple competing distributions to model the random variables of interest.

An effective approach, based on Bayesian information updates, was developed by Cai et al. (2009)[22] to tackle this problem.
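A minimal sketch of the general idea of a Bayesian information update (a generic illustration with exponential candidate distributions, not the specific model of Cai et al.): maintain a belief over a finite set of competing processing-time distributions and revise it after each observed processing time.

```python
import math

def bayes_update(prior, candidate_rates, observed_time):
    """Posterior over candidate exponential processing-time distributions
    after observing one realized processing time (illustrative model)."""
    # likelihood of the observation under each candidate rate
    likelihoods = [r * math.exp(-r * observed_time) for r in candidate_rates]
    unnorm = [p * l for p, l in zip(prior, likelihoods)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# two competing hypotheses about the processing-time distribution:
# exponential with mean 2.0 (rate 0.5) versus mean 0.5 (rate 2.0)
rates = [0.5, 2.0]
belief = [0.5, 0.5]                     # uninformative prior
for t in [1.8, 2.3, 1.1]:               # hypothetical observed processing times
    belief = bayes_update(belief, rates, t)
print(belief)                            # mass shifts toward the mean-2.0 model
```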

When the scheduling policy is static in the sense that it does not change over time, optimal sequences are identified to minimize the expected discounted reward and stochastically minimize the number of tardy jobs under a common exponential due date.