Strings are sequences of characters. However, what constitutes a character depends greatly on the language being used and the settings of the operating system on which the application runs. Gone are the days when a string was just a sequence of bytes, with each byte representing a character from the ASCII character encoding. Multibyte encodings (either fixed length or variable length) are needed to accurately store text in today's global economy.
With that said, most interview problems will avoid variable-length character encodings to simplify matters. The individual characters will be referred to as characters or bytes depending mostly on the language being used: Languages such as Java and C# have a built-in Unicode character type, whereas C/C++ does not. In general, most programming examples involving strings will use the natural character type for the language in question.
If you have specific experience with internationalization and localization, don't hesitate to point this out during the interview. You can explain what would have to be done differently to handle a variable-length character encoding, for example, even as you code a solution that works only (as the interviewer requested) with a single-byte character encoding such as ASCII.
No matter how they're encoded, most languages store strings internally as arrays, even if they differ greatly in how they treat arrays and strings. As before, we'll look at each language separately.
A C string is nothing more than an array of characters terminated by a null character ('\0').