From: Rafael J. Wysocki
Date: Wed, 27 Feb 2019 13:35:50 +0000 (+0100)
Subject: cpuidle: menu: Avoid overflows when computing variance
X-Git-Url: http://git.maquefel.me/?a=commitdiff_plain;h=814b8797f9863abc2877acf87f6be0f140d00139;p=linux.git

cpuidle: menu: Avoid overflows when computing variance

The variance computation in get_typical_interval() may overflow if the
square of the value of diff exceeds the maximum value of the int64_t
data type, which basically is the case when diff is of the order of
UINT_MAX.

However, data points that far out don't matter for idle state selection
anyway, so change the initial threshold value in get_typical_interval()
to INT_MAX, which will cause more "outlying" data points to be discarded
without affecting the selection result.

Reported-by: Randy Dunlap
Signed-off-by: Rafael J. Wysocki
---

diff --git a/drivers/cpuidle/governors/menu.c b/drivers/cpuidle/governors/menu.c
index 61316fc51548a..5951604e7d5c6 100644
--- a/drivers/cpuidle/governors/menu.c
+++ b/drivers/cpuidle/governors/menu.c
@@ -186,7 +186,7 @@ static unsigned int get_typical_interval(struct menu_device *data,
 	unsigned int min, max, thresh, avg;
 	uint64_t sum, variance;
 
-	thresh = UINT_MAX; /* Discard outliers above this value */
+	thresh = INT_MAX; /* Discard outliers above this value */
 
 again:
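
The arithmetic behind the fix can be checked in isolation. Below is a
minimal userspace C sketch (an illustration, not part of the commit or
of the kernel code): it takes a hypothetical diff of the order of
UINT_MAX, squares it in unsigned 64-bit arithmetic (performing the
signed int64_t multiplication directly, as the overflowing variance
loop effectively did, would be undefined behavior), and checks whether
the square is still representable in int64_t.

#include <stdint.h>
#include <stdio.h>
#include <limits.h>

int main(void)
{
	/* Hypothetical outlier of the order of UINT_MAX (~4.3e9). */
	uint64_t diff = UINT_MAX;
	uint64_t sq = diff * diff;	/* ~1.8e19, well-defined in uint64_t */

	printf("UINT_MAX^2 = %llu, fits in int64_t: %s\n",
	       (unsigned long long)sq,
	       sq <= (uint64_t)INT64_MAX ? "yes" : "no");	/* prints "no" */

	/* With thresh = INT_MAX, surviving points are at most ~2.1e9. */
	diff = INT_MAX;
	sq = diff * diff;		/* ~4.6e18 */

	printf("INT_MAX^2  = %llu, fits in int64_t: %s\n",
	       (unsigned long long)sq,
	       sq <= (uint64_t)INT64_MAX ? "yes" : "no");	/* prints "yes" */

	return 0;
}

The numbers make the choice of threshold plausible: UINT_MAX squared is
about 1.8e19, above INT64_MAX (about 9.2e18), whereas INT_MAX squared
is about 4.6e18, so once data points above INT_MAX are discarded a
single squared term can no longer exceed the int64_t range.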