4. User-Defined Functions

When we execute Apache Spark code from .NET, we are calling methods and classes in the Java Virtual Machine (JVM), and Apache Spark reads, writes, aggregates, and transforms our data according to our requirements. It is entirely possible, and quite common, that the .NET application never sees the actual data and the JVM handles all of the data modifications. This is fine if Apache Spark has all of the classes and methods we need to complete our processing. However, what do we do when we need to do something that isn’t supported by Apache Spark? The answer is User-Defined Functions (UDFs) ...
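To make the idea concrete before the chapter goes further, here is a minimal sketch of what a UDF looks like with the Microsoft.Spark package: Functions.Udf wraps an ordinary .NET delegate so that, for each row, Spark sends the value to the .NET worker, runs the delegate, and takes the result back into the JVM-side plan. The session setup, the sample data, and the default column name "_1" are assumptions for illustration, not taken from the chapter.

using System;
using Microsoft.Spark.Sql;
using static Microsoft.Spark.Sql.Functions;

class UdfSketch
{
    static void Main()
    {
        // Start (or reuse) a Spark session; the application name is arbitrary.
        SparkSession spark = SparkSession
            .Builder()
            .AppName("udf-sketch")
            .GetOrCreate();

        // A single-column DataFrame of strings (the column is named "_1" by default).
        DataFrame df = spark.CreateDataFrame(new[] { "hello", "world" });

        // Udf<TIn, TOut> wraps a .NET lambda; Spark calls back into the .NET
        // worker process to run it for each row, rather than running it in the JVM.
        Func<Column, Column> shout = Udf<string, string>(s => s.ToUpper() + "!");

        df.Select(shout(df["_1"])).Show();
    }
}

Because the lambda runs in the .NET worker rather than the JVM, it can do anything .NET can do, which is exactly the gap UDFs fill when Apache Spark itself has no built-in function for the job.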