
Ms. Carolyn Swinney
University of Essex

Research Keywords & Expertise

Deep Learning
Signal Detection and Processing
Pattern Analysis and Machine Learning
Convolutional Neural Networks (CNN)
Unmanned Aerial Vehicles (UAV)


Short Biography

Carolyn J. Swinney received a first-class B.Eng. (Hons.) degree in 2007 and an M.Sc. (Dist.) in Electronics Engineering from the University of Essex, Colchester, UK, in 2013. She graduated as a Communications and Electronics Engineering Officer in the Royal Air Force in 2014. She currently works within the Air and Space Warfare Centre and is working towards a Ph.D. degree in Electronic Systems Engineering at the University of Essex, Colchester, UK. Her main research interests are signal processing, autonomous vehicles, machine learning and cyber security.


Feed

Journal article
Published: 01 July 2021 in Aerospace

Small unmanned aerial systems (UASs) present many potential solutions and enhancements to industry today but equally pose a significant security challenge. We only need to look at the levels of disruption caused by UASs at airports in recent years. The accuracy of UAS detection and classification systems based on radio frequency (RF) signals can be hindered by other interfering signals present in the same frequency band, such as Bluetooth and Wi-Fi devices. In this paper, we evaluate the effect of real-world interference from Bluetooth and Wi-Fi signals concurrently on convolutional neural network (CNN) feature extraction and machine learning classification of UASs. We assess multiple UASs that operate using different transmission systems: Wi-Fi, Lightbridge 2.0, OcuSync 1.0, OcuSync 2.0 and the recently released OcuSync 3.0. We consider 7 popular UASs, evaluating 2-class UAS detection, 8-class UAS type classification and 21-class UAS flight mode classification. Our results show that the process of CNN feature extraction using transfer learning and machine learning classification is fairly robust in the presence of real-world interference. We also show that UASs operating with the same transmission system can be distinguished. In the presence of interference from both Bluetooth and Wi-Fi signals, our results show 100% accuracy for UAV detection (2 classes), 98.1% (±0.4%) for UAV type classification (8 classes) and 95.4% (±0.3%) for UAV flight mode classification (21 classes).
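The pipeline the abstract describes, CNN feature extraction via transfer learning followed by a classical machine learning classifier, can be sketched in outline. The snippet below is a hypothetical illustration, not the paper's implementation: the CNN features are faked as separable Gaussian clusters (one per UAS class), and a cross-validated Logistic Regression stands in for the classification stage.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical stand-in for CNN transfer-learning features: the paper
# extracts a fixed-length vector per signal image from a pre-trained CNN;
# here we fabricate separable clusters, one per UAS class.
rng = np.random.default_rng(42)
n_classes, per_class, dim = 8, 30, 64        # e.g. the 8-class UAS type task
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(per_class, dim))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), per_class)

# Cross-validated classification on the extracted features.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(round(scores.mean(), 3))
```

On real RF captures the feature vectors would come from a pre-trained network's penultimate layer rather than a random generator, but the classification step is the same.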

ACS Style

Carolyn Swinney; John Woods. The Effect of Real-World Interference on CNN Feature Extraction and Machine Learning Classification of Unmanned Aerial Systems. Aerospace 2021, 8, 179.

AMA Style

Carolyn Swinney, John Woods. The Effect of Real-World Interference on CNN Feature Extraction and Machine Learning Classification of Unmanned Aerial Systems. Aerospace. 2021;8(7):179.

Chicago/Turabian Style

Carolyn Swinney; John Woods. 2021. "The Effect of Real-World Interference on CNN Feature Extraction and Machine Learning Classification of Unmanned Aerial Systems." Aerospace 8, no. 7: 179.

Journal article
Published: 16 March 2021 in Aerospace

Unmanned Aerial Vehicles (UAVs) undoubtedly pose many security challenges. We need only look to the December 2018 Gatwick Airport incident for an example of the disruption UAVs can cause. In total, 1000 flights were grounded for 36 h over the Christmas period, at an estimated cost of over 50 million pounds. In this paper, we introduce a novel approach which treats UAV detection as an imagery classification problem. We consider the signal representations Power Spectral Density (PSD), spectrogram, histogram and raw IQ constellation as graphical images presented to a deep Convolutional Neural Network (CNN), ResNet50, for feature extraction. With the network pre-trained on ImageNet, transfer learning is utilised to mitigate the requirement for a large signal dataset. We evaluate performance using a Logistic Regression machine learning classifier. Three popular UAVs are classified in different modes: switched on, hovering, flying, and flying with video; together with the no-UAV-present case, this creates a total of 10 classes. Our results, validated with 5-fold cross-validation and an independent dataset, show the PSD representation to produce over 91% accuracy across the 10 classes. Our paper treats UAV detection as an imagery classification problem by presenting signal representations as images to a ResNet50, utilising the benefits of transfer learning and outperforming previous work in the field.
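The PSD signal representation the abstract mentions can be sketched as follows. This is a minimal, hypothetical example, not the paper's code: it computes a Welch PSD of a synthetic complex IQ capture (a tone in noise standing in for a UAV control signal) and scales it to 8-bit grayscale, the kind of image-style representation the paper presents to a pre-trained CNN.

```python
import numpy as np
from scipy import signal

def psd_image(iq, fs=1.0, nperseg=1024):
    """Welch PSD of a complex IQ capture, scaled to 8-bit grayscale.

    Sketch only: the paper renders PSD plots as graphical images for a
    CNN; here we show just the PSD computation and intensity scaling.
    """
    _, pxx = signal.welch(iq, fs=fs, nperseg=nperseg, return_onesided=False)
    pxx_db = 10 * np.log10(pxx + 1e-12)          # log scale, avoid log(0)
    lo, hi = pxx_db.min(), pxx_db.max()
    return np.uint8(np.round(255 * (pxx_db - lo) / (hi - lo)))

# Synthetic stand-in for a UAV control signal: a complex tone in noise.
rng = np.random.default_rng(0)
n = 8192
tone = np.exp(2j * np.pi * 0.1 * np.arange(n))
noise = 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
img = psd_image(tone + noise)
print(img.shape, img.dtype)
```

A spectrogram, histogram or raw IQ constellation image could be produced along the same lines before being fed to the feature extractor.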

ACS Style

Carolyn Swinney; John Woods. Unmanned Aerial Vehicle Operating Mode Classification Using Deep Residual Learning Feature Extraction. Aerospace 2021, 8, 79.

AMA Style

Carolyn Swinney, John Woods. Unmanned Aerial Vehicle Operating Mode Classification Using Deep Residual Learning Feature Extraction. Aerospace. 2021;8(3):79.

Chicago/Turabian Style

Carolyn Swinney; John Woods. 2021. "Unmanned Aerial Vehicle Operating Mode Classification Using Deep Residual Learning Feature Extraction." Aerospace 8, no. 3: 79.

Conference paper
Published: 29 December 2020 in 2020 16th International Computer Engineering Conference (ICENCO)

Unmanned Aerial Vehicles (UAVs) are changing the way major industries conduct business in a globally friendly and efficient manner. Along with these economic benefits come major security challenges. The 2018 Gatwick Airport incident, in which 1000 flights were cancelled over a 36-hour period, is estimated to have cost £50 million. Radio Frequency (RF) fingerprinting is an approach to UAV detection and classification associated with longer detection distances. Classification of a UAV flight mode would provide police commanders with information to assist risk assessment. For example, intelligence operations could be associated with a UAV flying while transmitting video. In this paper we introduce RF fingerprinting using Power Spectral Density (PSD) and spectrograms. This work utilises the open DroneRF dataset, made up of signals from the Parrot Bebop, Parrot AR and DJI Phantom 3, further broken down into flight modes including switched on, hovering, and flying with and without video transmission. Signal representations for each class are treated as images, and transfer learning is implemented through the extraction of features using a VGG16 Convolutional Neural Network (CNN) pre-trained on the ImageNet database. The machine learning classifiers Logistic Regression (LR), Support Vector Machine and Random Forest are evaluated in these experiments for 2 classes (UAV present or not), 4 classes (UAV type) and 10 classes (UAV flight modes). We show that the PSD representation has higher accuracy than time-domain spectrograms, producing 100% accuracy for UAV detection and 88.6% accuracy for UAV type classification with LR. For 10-class classification including flight modes, LR with PSD produced 87.3% accuracy, a 40% increase on prior work in the field by Al-Sa'd et al. Overall, an approach has been introduced using transfer learning through CNN feature extraction and machine learning classification which performs with high accuracy compared with previous work on the same DroneRF dataset.
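The evaluation loop over the three classifier families can be sketched as below. Again this is a hypothetical illustration, not the paper's code: the VGG16 features are fabricated as separable Gaussian clusters, and the loop simply shows how LR, SVM and RF would be compared on the same extracted features with cross-validation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Hypothetical stand-in for VGG16-extracted features (fabricated separable
# clusters), used only to demonstrate the classifier comparison loop.
rng = np.random.default_rng(0)
n_classes, per_class, dim = 4, 40, 32        # e.g. the 4-class UAV type task
X = np.vstack([rng.normal(loc=2 * c, scale=1.0, size=(per_class, dim))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), per_class)

# Compare the three classifier families on identical features.
accs = {}
for name, clf in [("LR", LogisticRegression(max_iter=1000)),
                  ("SVM", SVC()),
                  ("RF", RandomForestClassifier(random_state=0))]:
    accs[name] = cross_val_score(clf, X, y, cv=5).mean()
    print(name, round(accs[name], 3))
```

On real DroneRF-derived features the accuracies would differ by representation (PSD vs spectrogram) and task, which is exactly the comparison the paper reports.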

ACS Style

Carolyn J. Swinney; John C. Woods. Unmanned Aerial Vehicle Flight Mode Classification using Convolutional Neural Network and Transfer Learning. 2020 16th International Computer Engineering Conference (ICENCO) 2020, 83-87.

AMA Style

Carolyn J. Swinney, John C. Woods. Unmanned Aerial Vehicle Flight Mode Classification using Convolutional Neural Network and Transfer Learning. 2020 16th International Computer Engineering Conference (ICENCO). 2020:83-87.

Chicago/Turabian Style

Carolyn J. Swinney; John C. Woods. 2020. "Unmanned Aerial Vehicle Flight Mode Classification using Convolutional Neural Network and Transfer Learning." 2020 16th International Computer Engineering Conference (ICENCO): 83-87.