Here, we can directly reuse the list of generic English stopwords provided by Spark. However, we can enrich it with our own specific stopwords:
import org.apache.spark.ml.feature.StopWordsRemover

val stopWords = StopWordsRemover.loadDefaultStopWords("english") ++ Array("ax", "arent", "re")
As stated earlier, this is an extremely delicate task that depends heavily on the business problem you are trying to solve. You may wish to extend this list with terms that are frequent in your domain but carry no signal for the prediction task.
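As a side note, if you prefer Spark's Pipeline API over a hand-rolled function, the enriched list can also be supplied to a StopWordsRemover transformer directly; a minimal sketch, where the column names rawTokens and tokens are assumptions:

// A minimal sketch; "rawTokens" and "tokens" are hypothetical column names.
val remover = new StopWordsRemover()
  .setStopWords(stopWords)   // use the enriched list instead of the defaults
  .setInputCol("rawTokens")
  .setOutputCol("tokens")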
Next, declare a tokenizer function that splits reviews into tokens and omits all stopwords and words that are too short:
val MIN_TOKEN_LENGTH = 3
val toTokens = (minTokenLen: Int, stopWords: Array[String], review: String) => ...
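The body of the function is elided above; a minimal sketch of what it might look like, assuming we split on non-word characters, lowercase, strip any remaining non-alphabetic characters, and then filter by length and the stopword list:

// A sketch under the assumptions stated above, not the author's exact code.
val toTokens = (minTokenLen: Int, stopWords: Array[String], review: String) =>
  review.split("""\W+""")                                      // split on non-word characters
    .map(_.toLowerCase.replaceAll("[^\\p{IsAlphabetic}]", "")) // keep letters only
    .filter(w => w.length >= minTokenLen)                      // drop tokens that are too short
    .filter(w => !stopWords.contains(w))                       // drop stopwords

For example, toTokens(MIN_TOKEN_LENGTH, stopWords, "This movie was surprisingly good!") would yield tokens such as "movie", "surprisingly", and "good", with the stopwords and short words removed.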