Financial markets are the granddaddy of all time series data. If you pay for proprietary trading data on a high-tech exchange, you can receive terabyte-sized floods of data that can take days to process, even with high-performance computing and embarrassingly parallel processing.
High-frequency traders are among the newest and most infamous members of the finance community, and they trade on information and insights resulting from time series analysis at the microsecond level. On the other hand, traditional trading firms—looking at longer-term time series over hours, days, or even months—continue to succeed in the markets, showing that time series analysis of financial data can be conducted successfully in myriad ways and at timescales spanning many orders of magnitude, from microseconds to months.
Embarrassingly parallel describes data processing tasks where the results of processing one segment of data are in no way dependent on the values of another segment of data. In such cases it’s embarrassingly easy to convert data analysis tasks to run in parallel rather than in sequence to take advantage of multicore or multimachine computing options.
Consider, for example, the task of computing the daily mean of minute-by-minute returns on a given stock. Each day can be treated separately and in parallel. In contrast, computing the exponentially weighted moving average of daily volatility is not embarrassingly parallel, because the value for any given day depends on the values computed for the days before it, so the days cannot be processed independently.
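The contrast can be sketched in a few lines of Python. The returns, dates, and the `alpha` smoothing parameter below are invented for illustration, and a thread pool stands in for whatever multicore or multimachine setup a real workload would use:

```python
from statistics import mean
from concurrent.futures import ThreadPoolExecutor

# Toy minute-by-minute returns keyed by trading day (values invented).
returns_by_day = {
    "2023-01-03": [0.001, -0.002, 0.0005],
    "2023-01-04": [-0.001, 0.003, 0.0002],
    "2023-01-05": [0.002, -0.0015, 0.001],
}

days = sorted(returns_by_day)

# Embarrassingly parallel: each day's mean uses only that day's data,
# so the days can be farmed out to a pool of workers in any order.
with ThreadPoolExecutor() as pool:
    daily_means = dict(
        zip(days, pool.map(lambda d: mean(returns_by_day[d]), days))
    )

# Not embarrassingly parallel: an exponentially weighted moving average,
# where each value depends on the previous value, so the loop must run
# in sequence over the days.
alpha = 0.2  # smoothing parameter, chosen arbitrarily for illustration
ewma = []
prev = daily_means[days[0]]
for d in days:
    prev = alpha * daily_means[d] + (1 - alpha) * prev
    ewma.append(prev)
```

Note that the first loop could just as well be split across machines, while the second cannot: no amount of hardware removes the dependence of each EWMA value on its predecessor.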