January 2018
Now we create another, rather stupid, transform_func that takes more time depending on the input value, and a range of increasing values using std::iota() like this:
const auto transform_func = [](float v) {
  auto sum = v;
  auto i_max = v / 100'000; // The larger "v" is, the more to compute
  for (size_t i = 0; i < i_max; ++i) {
    sum += (i * i * i * sum);
  }
  return sum;
};
auto n = size_t{10'000'000};
auto src = std::vector<float>(n);
std::iota(src.begin(), src.end(), 0.0f); // "src" goes from 0 to n
If we evaluate them with different chunk sizes, as well as std::transform() and the old par_transform_naive(), we get the following computation times:
| Function | Chunk size | Number of tasks | Microseconds | Speed up |
| --- | --- | --- | --- | --- |
| std::transform() | ... |