Recent studies have shown that handwritten characters can be distinguished from each other with high accuracy, enabling security threats such as impersonation and side-channel attacks, but also systems that mirror handwritten characters into digital space.

Most of these studies focused solely on recording characters and building (complex) systems around their classification, resulting in sparse data sets collected with specialized hardware in restricted settings.

With these specialized settings and hardware, it is unclear which factors limit the classification accuracy, be it the type of sensor or the general writing style of a person, and whether these findings also apply to consumer hardware or everyday settings such as writing with a simple pen on paper.

This work aims to establish clear constraints and settings for recording handwritten characters, using a simple pen-and-paper setup together with multiple consumer devices.

A data set of handwritten lower-case characters is recorded with multiple consumer wearables placed at different positions on the forearm, while the speed and size of each drawn character are constrained. The recordings are processed into several time-domain and frequency-domain features and classified with different machine learning methods, yielding accuracies of 20 % to 22 % for the IMU data, 15 % to 17 % for the EMG data, and 16 % to 20 % for a combined approach.
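To make the feature-extraction step concrete, the following is a minimal sketch of computing a few typical time-domain and frequency-domain features from fixed-length signal windows and classifying them. The specific features, the nearest-centroid classifier, and the synthetic data are illustrative assumptions, not the exact pipeline of this work:

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(window):
    """Time- and frequency-domain features for one 1-D signal window."""
    fft_mag = np.abs(np.fft.rfft(window))
    return np.array([
        window.mean(),                    # time: mean amplitude
        window.std(),                     # time: standard deviation
        np.sqrt(np.mean(window**2)),      # time: root mean square
        float(fft_mag[1:].argmax() + 1),  # frequency: dominant bin (DC excluded)
        float((fft_mag**2).sum()),        # frequency: spectral energy
    ])

# Synthetic stand-in for the recorded sensor windows:
# 26 lower-case classes, 20 windows each, 128 samples per window.
n_classes, n_windows, win_len = 26, 20, 128
labels = np.repeat(np.arange(n_classes), n_windows)
raw = rng.normal(size=(n_classes * n_windows, win_len)) + labels[:, None] * 0.05
features = np.array([extract_features(w) for w in raw])

# Nearest-centroid classification as a stand-in for the ML methods used:
centroids = np.array([features[labels == c].mean(axis=0) for c in range(n_classes)])
pred = np.argmin(np.linalg.norm(features[:, None] - centroids[None], axis=2), axis=1)
accuracy = (pred == labels).mean()
```

In the actual study, separate feature vectors would be computed per sensor channel (IMU axes, EMG electrodes) and the classifiers evaluated on held-out data rather than the training set.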

These results are in the range of current state-of-the-art findings when adjusted for the number of classes handled by the classifiers, so the constraints defined in this work may indicate which limitations are most useful when classifying characters from signal data using consumer devices.