Towards Low-Latency Batched Stream Processing by Pre-Scheduling

Research output: Contribution to journal › Journal article › Research › peer-review

Standard

Towards Low-Latency Batched Stream Processing by Pre-Scheduling. / Jin, Hai; Chen, Fei; Wu, Song; Yao, Yin; Liu, Zhiyi; Gu, Lin; Zhou, Yongluan.

In: IEEE Transactions on Parallel and Distributed Systems, Vol. 30, No. 3, 8444732, 2019, p. 710-722.


Harvard

Jin, H, Chen, F, Wu, S, Yao, Y, Liu, Z, Gu, L & Zhou, Y 2019, 'Towards Low-Latency Batched Stream Processing by Pre-Scheduling', IEEE Transactions on Parallel and Distributed Systems, vol. 30, no. 3, 8444732, pp. 710-722. https://doi.org/10.1109/TPDS.2018.2866581

APA

Jin, H., Chen, F., Wu, S., Yao, Y., Liu, Z., Gu, L., & Zhou, Y. (2019). Towards Low-Latency Batched Stream Processing by Pre-Scheduling. IEEE Transactions on Parallel and Distributed Systems, 30(3), 710-722. [8444732]. https://doi.org/10.1109/TPDS.2018.2866581

Vancouver

Jin H, Chen F, Wu S, Yao Y, Liu Z, Gu L et al. Towards Low-Latency Batched Stream Processing by Pre-Scheduling. IEEE Transactions on Parallel and Distributed Systems. 2019;30(3):710-722. 8444732. https://doi.org/10.1109/TPDS.2018.2866581

Author

Jin, Hai ; Chen, Fei ; Wu, Song ; Yao, Yin ; Liu, Zhiyi ; Gu, Lin ; Zhou, Yongluan. / Towards Low-Latency Batched Stream Processing by Pre-Scheduling. In: IEEE Transactions on Parallel and Distributed Systems. 2019 ; Vol. 30, No. 3. pp. 710-722.

Bibtex

@article{6367b584385c455798d69f05add181e2,
title = "Towards Low-Latency Batched Stream Processing by Pre-Scheduling",
abstract = "Many stream processing frameworks have been developed to meet the requirements of real-time processing. Among them, batched stream processing frameworks are widely advocated for their fault tolerance and high throughput. In batched stream processing frameworks, stragglers, which arise from uneven task execution times, are a major hurdle for latency-sensitive applications. Existing straggler mitigation techniques, whether reactive or proactive, are all post-scheduling methods and therefore inevitably incur high resource overhead or long job completion times. We observe that batched stream processing jobs are usually recurring, with predictable characteristics. Exploiting this feature, we present Lever, a pre-scheduling straggler mitigation framework. Lever first identifies potential stragglers and evaluates node capacity by analyzing the execution information of historical jobs. Lever then pre-schedules each job's input data to the nodes before task scheduling so as to mitigate potential stragglers. We implement Lever and contribute it as an extension of Apache Spark Streaming. Our experimental results show that Lever reduces job completion time by 30.72{\%} to 42.19{\%} over Spark Streaming, a widely adopted batched stream processing system, and significantly outperforms traditional techniques.",
keywords = "data assignment, recurring jobs, scheduling, straggler, stream processing",
author = "Hai Jin and Fei Chen and Song Wu and Yin Yao and Zhiyi Liu and Lin Gu and Yongluan Zhou",
year = "2019",
doi = "10.1109/TPDS.2018.2866581",
language = "English",
volume = "30",
pages = "710--722",
journal = "IEEE Transactions on Parallel and Distributed Systems",
issn = "1045-9219",
publisher = "IEEE Computer Society Press",
number = "3",

}

RIS

TY - JOUR

T1 - Towards Low-Latency Batched Stream Processing by Pre-Scheduling

AU - Jin, Hai

AU - Chen, Fei

AU - Wu, Song

AU - Yao, Yin

AU - Liu, Zhiyi

AU - Gu, Lin

AU - Zhou, Yongluan

PY - 2019

Y1 - 2019

N2 - Many stream processing frameworks have been developed to meet the requirements of real-time processing. Among them, batched stream processing frameworks are widely advocated for their fault tolerance and high throughput. In batched stream processing frameworks, stragglers, which arise from uneven task execution times, are a major hurdle for latency-sensitive applications. Existing straggler mitigation techniques, whether reactive or proactive, are all post-scheduling methods and therefore inevitably incur high resource overhead or long job completion times. We observe that batched stream processing jobs are usually recurring, with predictable characteristics. Exploiting this feature, we present Lever, a pre-scheduling straggler mitigation framework. Lever first identifies potential stragglers and evaluates node capacity by analyzing the execution information of historical jobs. Lever then pre-schedules each job's input data to the nodes before task scheduling so as to mitigate potential stragglers. We implement Lever and contribute it as an extension of Apache Spark Streaming. Our experimental results show that Lever reduces job completion time by 30.72% to 42.19% over Spark Streaming, a widely adopted batched stream processing system, and significantly outperforms traditional techniques.

AB - Many stream processing frameworks have been developed to meet the requirements of real-time processing. Among them, batched stream processing frameworks are widely advocated for their fault tolerance and high throughput. In batched stream processing frameworks, stragglers, which arise from uneven task execution times, are a major hurdle for latency-sensitive applications. Existing straggler mitigation techniques, whether reactive or proactive, are all post-scheduling methods and therefore inevitably incur high resource overhead or long job completion times. We observe that batched stream processing jobs are usually recurring, with predictable characteristics. Exploiting this feature, we present Lever, a pre-scheduling straggler mitigation framework. Lever first identifies potential stragglers and evaluates node capacity by analyzing the execution information of historical jobs. Lever then pre-schedules each job's input data to the nodes before task scheduling so as to mitigate potential stragglers. We implement Lever and contribute it as an extension of Apache Spark Streaming. Our experimental results show that Lever reduces job completion time by 30.72% to 42.19% over Spark Streaming, a widely adopted batched stream processing system, and significantly outperforms traditional techniques.

KW - data assignment

KW - recurring jobs

KW - scheduling

KW - straggler

KW - stream processing

UR - http://www.scopus.com/inward/record.url?scp=85052709429&partnerID=8YFLogxK

U2 - 10.1109/TPDS.2018.2866581

DO - 10.1109/TPDS.2018.2866581

M3 - Journal article

VL - 30

SP - 710

EP - 722

JO - IEEE Transactions on Parallel and Distributed Systems

JF - IEEE Transactions on Parallel and Distributed Systems

SN - 1045-9219

IS - 3

M1 - 8444732

ER -

ID: 203670980
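
The abstract describes pre-scheduling at a high level: estimate node capacity from the execution history of a recurring job, flag likely stragglers, and assign input data to nodes before task scheduling. The following is a minimal, illustrative sketch of that idea only, not Lever's actual algorithm; the function `pre_schedule`, the `straggler_factor` threshold, and the inverse-mean capacity heuristic are all assumptions made for this example.

```python
from statistics import mean, median

def pre_schedule(history, partitions, straggler_factor=1.5):
    """Assign input partitions to nodes before task scheduling,
    using per-node task durations observed in historical runs
    of a recurring job.

    history: dict mapping node name -> list of past task durations (seconds)
    partitions: list of input partitions to place
    """
    avg = {node: mean(durations) for node, durations in history.items()}
    med = median(avg.values())
    # Flag likely stragglers: nodes whose mean task duration is
    # well above the median across all nodes.
    stragglers = {n for n, a in avg.items() if a > straggler_factor * med}
    # Model capacity as inversely proportional to mean task duration;
    # likely stragglers receive no new input data.
    capacity = {n: (0.0 if n in stragglers else 1.0 / avg[n]) for n in avg}
    if not any(capacity.values()):
        # Degenerate case (everything flagged): fall back to raw capacity.
        capacity = {n: 1.0 / avg[n] for n in avg}
    assignment = {n: [] for n in avg}
    for p in partitions:
        # Greedy: give the next partition to the node with the lowest
        # estimated load (partitions already assigned / capacity).
        best = min((n for n in capacity if capacity[n] > 0),
                   key=lambda n: len(assignment[n]) / capacity[n])
        assignment[best].append(p)
    return assignment
```

For example, with `history = {"n1": [1.0, 1.0], "n2": [1.0, 1.2], "n3": [5.0, 6.0]}`, node `n3` is flagged as a likely straggler and receives no partitions, while `n1` and `n2` split the input roughly in proportion to their estimated capacity.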