Recently Defended PhD Theses





(Surname First In CAPS)






OJO, Adebola Kehinde

Intelligent Text Mining Model for Characterisation and Trend Analysis in Academic Journal Publications

Dr A. B. Adeyemo




WOODS, Nancy Chinyere

Low-Level Multimedia Recognition and Classification for Intelligence and Forensic Analysis

Dr A. B. C. Robert




FASOLA, Olusanjo Olugbemi

Improving Vision Impaired Users Access to Electronic Resources in E-Learning Environment with Modified Artificial Neural Network

Dr A. B. C. Robert






Multi-level Pattern Prediction of Unstructured Financial Data Using Extended Hidden Markov Decision Model

Dr A. B. C. Robert




ADENIJI, Oluwashola David 

Route Optimisation in Mobile IPV6 Wireless Networks for Improved Network Mobility

Prof. Adenike Osofisan



NAME:                       Oluwashola David ADENIJI

MATRIC. No.:          169093




Route Optimisation (RO) in Mobile Internet Protocol Version Six (MIPv6) is a technique that enables a Mobile Node (MN) and a Corresponding Node (CN) to communicate directly, bypassing the Home Agent (HA). RO is usually faced with the problem of nested Internet Protocol (IP) tunnels due to pinball (sub-optimal) routing, so enhancement of the RO protocol is necessary to minimise multilayer tunnels. In this work, an Enhanced RO Protocol (EROP) in a MIPv6 Wireless Network for Network Mobility (NEMO) was developed to reduce the problem of multilayer tunnels.


The Ubuntu Linux operating system was used to create and develop the entities in the RO wireless test-bed, namely MIPv6 for Linux (MIPL) and NEMO for Linux (NEPL). The operation of the test-bed was divided into three phases. In Phase 1, the MN was at its Home Network, with MIPv6 mobility management started with the HA, MN and CN. In Phase 2, the Mobile Router (MR) was at its Home Network, with NEMO mobility management in MIPv6 started with the MR and HA. In Phase 3, the CN and Corresponding Router (CR) mobility management were set up. The combination of Phases 1 and 3 completes the network configuration of EROP, while the combination of Phases 1, 2 and 3 completes the network configuration for NEMO. Break-Before-Make (BBM) and Make-Before-Break (MBB) handoffs were used to analyse packet loss for the Transmission Control Protocol (TCP) in two tests. Ping6 and Wireshark were used to evaluate the handoff latency, Netperf was used to analyse the throughput of the two protocols, and packet delay (latency) was used to evaluate their performance. The experiment was carried out twenty times to measure the Round Trip Time (RTT) for EROP and NEMO using the ping6 program with a packet size of 56 bytes.


The EROP packet delay was 0.326 ms while NEMO's was 0.603 ms without RO; the resulting packet delay showed that EROP reduced the handoff latency by 45.97% at optimal performance. In a similar experiment with RO enabled, the EROP packet delay was 0.322 ms while NEMO's was 0.372 ms, showing that EROP reduced the handoff latency by 13.4%. The average NEMO packet delay was 0.305 ms as against EROP's 0.280 ms; on average, EROP provided an 8.2% reduction in handoff latency with RO. With Wireshark, the NEMO average latency was 4.10 s while EROP's was 3.57 s; the average total handoff latency of MIPv6 was 3.83 s. TCP test 1 showed the effects of packet loss at 11.2 s with TCP sequence number 150,000,000, whereas TCP test 2 reduced the packet loss to 3.4 s with TCP sequence number 34,000,000. The TCP IPv6 stream test 1 using Netperf showed that the EROP throughput was 13,869.77 Mbps as against NEMO's 13,912.63 Mbps for a duration of 60 s.
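The percentage reductions in handoff latency quoted above follow directly from the measured packet delays; a quick arithmetic check using the ping6 measurements from this abstract:

```python
# Percentage reduction in handoff latency of EROP relative to NEMO,
# computed from the measured packet delays quoted above (in ms).
def reduction(nemo_ms, erop_ms):
    return (nemo_ms - erop_ms) / nemo_ms * 100

without_ro = reduction(0.603, 0.326)  # without RO: ~46%
with_ro = reduction(0.372, 0.322)     # with RO enabled: ~13.4%
average = reduction(0.305, 0.280)     # average delays: ~8.2%
print(f"{without_ro:.1f}% {with_ro:.1f}% {average:.1f}%")
```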


The enhanced route optimisation protocol reduces the handoff latency of the network mobility protocol. It can be deployed for video streaming and other computing activities that require low handoff latency.

Keywords: Mobile IPv6, Network mobility, Sub-optimality, Handoff latency.

Word Count: 485



NAME:                      Olusanjo Olugbemi FASOLA

MATRIC. No.:          140505



Assistive Technologies (ATs) provide the means through which persons with visual impairment are empowered with adaptive devices and methods for accessing multimedia information. However, the sensitivity and specificity values for access to electronic resources by visually impaired persons vary. Existing ATs were designed as "one model fits all" (static calibration requirements), thereby limiting their usability by vision impaired users in an e-learning environment. This study presents a Dynamic Thresholding Model (DTM) that adaptively adjusts vision parameters to meet the calibration requirements of vision impaired users.

Data from the International Statistical Classification of Diseases and Related Health Problems of the World Health Organisation (WHO), containing 1,001 instances of visual impairment measures, were obtained for 2008 to 2013. The WHO users' vision parameters for Visual Acuity Range (VAR) were adopted. These were: VAR ≥ 0.3 (299); 0.1 < VAR < 0.3 (182); 0.07 ≤ VAR < 0.1 (364); 0.05 ≤ VAR < 0.07 (120); 0.02 ≤ VAR < 0.05 (24); and VAR < 0.02 (12). Data for the six VAR groups were partitioned into 70% (700) for training and 30% (301) for testing, and transformed into a 3-bit encoding to facilitate model derivation. The DTM was developed with calibrator parameters (Visual Acuity (Va), Print Size (Ps) and Reading Rate (Rr)) for low acuity, an adaptive vision calibrator and dynamic thresholding. The VAR from the developed DTM was used to predict the optimal operating range and accuracy value on the observed WHO dataset irrespective of grouping. Six epochs were conducted for each thresholding value to determine the sensitivity and specificity values relative to the False Negative Rate (FNR) and False Positive Rate (FPR), respectively, which are evidence of misclassification.
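The six VAR groups above can be expressed as a simple classifier. A minimal sketch follows; the group boundaries are the WHO ranges listed above, while the particular 3-bit patterns are illustrative assumptions, since the abstract states that a 3-bit encoding was used but not the specific bit values:

```python
# Map a Visual Acuity Range (VAR) value to its WHO group (1-6) using the
# boundaries listed above, then encode the group as an illustrative
# 3-bit string. The exact encoding used in the thesis is not specified.
def var_group(var):
    if var >= 0.3:
        return 1
    elif var > 0.1:
        return 2
    elif var >= 0.07:
        return 3
    elif var >= 0.05:
        return 4
    elif var >= 0.02:
        return 5
    else:
        return 6

def encode(group):
    # e.g. group 1 -> "000", group 6 -> "101" (illustrative mapping)
    return format(group - 1, "03b")

print(var_group(0.08), encode(var_group(0.08)))  # 3 010
```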

The 3-bit encoding coupled with the DTM yielded optimised equations of the form:

where OP1, OP2 and OP3 represent the first, second and third bits, respectively. Five local maxima and one global maximum threshold value were obtained from the DTM. The local maxima threshold values were 0.455, 0.470, 0.515, 0.530 and 0.580, with corresponding percentage accuracies of 99.257, 99.343, 99.171, 99.229 and 99.429. The global maximum accuracy was 99.6% at a threshold value of 0.5. The Va, Ps and Rr produced equal numbers of observations (301), agreeing with the result in the WHO report. Correctly classified user impairment was 99.89%, with an error rate of 0.11%. The model predicted a sensitivity value of 99.79% (0.21 FNR) and a specificity value of 99.52% (0.48 FPR).
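The reported sensitivity and specificity are consistent with the quoted error rates, since sensitivity = 100% − FNR and specificity = 100% − FPR (all values in percent):

```python
# Sensitivity and specificity (in percent) computed from the
# false-negative and false-positive rates quoted above.
def sensitivity(fnr_pct):
    return 100.0 - fnr_pct

def specificity(fpr_pct):
    return 100.0 - fpr_pct

print(sensitivity(0.21), specificity(0.48))
```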

The developed dynamic thresholding model adaptively classified various degrees of visual impairment for vision impaired users.

Keywords: Visual acuity, Visual print size, Assistive technology, Vision impaired reading rate

Word count: 410



NAME:                      Nancy Chinyere WOODS

MATRIC. No.:          63509




Digital images are becoming commonplace in both professional and private life, especially with the increase in digital devices and associated images on the Internet. Organisations and individuals have large collections of patented images that are cumbersome to identify explicitly, because their identity changes once they are accessed and stored away from the original source. Identification, qualification and subsequent retrieval have been attempted with edge detection, manual identification and annotation, which failed to resolve the legal issues surrounding the exact qualification of images. To encapsulate metadata in images for effective qualification and retrieval in forensic analysis, an automatic pixel locator, colour alignment and identification model was developed.


A model was developed and implemented in the Java programming language to generate complete colour samples in the Red-Green-Blue (RGB) colour model. An image dataset containing 851 Computer Generated Images (CGI) and 1,099 Natural Images (NI) was obtained from the Internet and personal collections. Seventy percent (1,365) of the images were used to train an object recognition algorithm using histograms of oriented gradients features, and thirty percent (585) were used for testing. An algorithm was developed to locate a pixel within a recognised object so that the object's predominant colour could be identified and verified against the RGB colour model. Descriptive data were generated and stored as metadata. A pixel locator and colour alignment algorithm was developed to emphasise colours in CGI and NI from a point of reference. Four groups were selected using the algorithm: exact RGB code (group 1), colour code in range of 10 (group 2), colour code in range of 20 (group 3) and colour code in range of 30 (group 4) from the point of reference.
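The four colour-code groups can be sketched as a range test around the reference pixel's RGB code. This is a minimal illustration: interpreting "range of n" as a maximum per-channel deviation is an assumption, and the function name is not from the thesis:

```python
# Classify a pixel's RGB triple against a reference colour into the four
# Colour Code Groups described above. "Range of n" is interpreted here
# as a maximum per-channel deviation (an assumption for illustration).
# Note: the full RGB model holds 256 ** 3 = 16,777,216 colour samples.
def colour_group(pixel, reference):
    dev = max(abs(p - r) for p, r in zip(pixel, reference))
    if dev == 0:
        return 1   # group 1: exact RGB code
    elif dev <= 10:
        return 2   # group 2: colour code in range of 10
    elif dev <= 20:
        return 3   # group 3: colour code in range of 20
    elif dev <= 30:
        return 4   # group 4: colour code in range of 30
    return None    # outside all four ranges

ref = (200, 120, 40)
print(colour_group((200, 120, 40), ref))  # 1
print(colour_group((205, 118, 43), ref))  # 2
```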


A total of 16,777,216 colour samples, including their visual presentations and RGB codes, were created and stored in a colour database. The algorithm recognised and classified objects in the test images. The predominant colour of the recognised object was identified with 99.88% accuracy and verified. The descriptive metadata were recognised and automatically embedded into the Exif section of the image header of 127 NI selected from the image dataset. The resulting metadata, used for retrieval purposes, were mobile, searchable and non-obstructive. The optimised equation from the four Colour Code Groups (CCG) is:




where P is the proportion emphasised, i = 1 – 4, j = 4 and K is a constant. The CGI pixel colour highlight emphasised an average of 69.8%, 92.9%, 96.9% and 98.6% of any colour code for groups 1, 2, 3 and 4, respectively. The projected colours for NI were 31.1%, 82.6%, 90.8% and 95.0% for groups 1, 2, 3 and 4, respectively. These NI contained a wide range of RGB colour codes for a particular colour with similar colour presentation.


The developed model adequately identified visible objects in images, classified their colours and embedded the corresponding metadata. These metadata enabled the authentication and retrieval of patented and watermarked images.

Keywords: Automatic image annotation, Colour identification, Natural images, Computer generated images

Word count: 487



NAME:                      Adebola Kehinde OJO 

MATRIC. No.:          118672




Text mining is the process of analysing collections of textual materials to capture key concepts and themes and to uncover hidden relationships and trends. The ever-growing volume of published academic journals, and the implicit knowledge that can be derived from them, has not fully enhanced knowledge development but has instead resulted in information and cognitive overload. Publication data are textual, unstructured and anomalous, and analysing such high-dimensional data manually is time-consuming, which has limited the ability to derive projections and trends from the patterns hidden in various publications. This study was designed to develop and use intelligent text mining techniques to elicit publication trends and characterise academic journal publications over time.

Journals Scoring Criteria (JSC) from nineteen rankers, covering 2001 to 2013, were used to select journals. Online Highly Rated Journals (HRJ) and Non-Highly Rated Journals (NHRJ), comprising 1,149 and 1,229 issues, respectively, from 1926 to 2013, were accessed. A simple random sample of 10% of the HRJ (115) and NHRJ (123) issues, comprising 1,189 and 857 articles, respectively, was analysed. Abstracts from Institute of Electrical and Electronics Engineers (IEEE) journals were used to determine trends in Computer Science from 1997 to 2016. A text-miner software was developed in Python to crawl and download the abstracts and bibliometric information of the articles selected from the HRJ, NHRJ and IEEE sources. The datasets were transformed into structured data and cleaned using filtering and stemming algorithms. Thereafter, the data were grouped into series of word features based on a bag-of-words document representation. The HRJ and NHRJ were clustered using the Self-Organising Map (SOM) method, with attribute weights in each cluster.
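The preprocessing pipeline described above (filtering, stemming, bag-of-words) can be sketched as follows. The stop-word list and the crude suffix-stripping stemmer are simplified stand-ins for the thesis's filtering and stemming algorithms, not the actual implementations:

```python
import re
from collections import Counter

# Illustrative stop-word subset (stand-in for the filtering algorithm).
STOP_WORDS = {"the", "of", "and", "a", "in", "to", "is", "for"}

def stem(word):
    """Crude suffix-stripping stemmer (stand-in for a full algorithm)."""
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def bag_of_words(abstract):
    """Filter, stem and count tokens: a bag-of-words representation."""
    tokens = re.findall(r"[a-z]+", abstract.lower())
    return Counter(stem(t) for t in tokens if t not in STOP_WORDS)

features = bag_of_words("Mining trends in journal publications using text mining")
print(features)
```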

The text-miner software was developed and successfully used to discover trends in the selected journal articles. A total of 5,862 (HRJ), 4,803 (NHRJ) and 131,554 (IEEE) word features were found by the crawler. In the HRJ, seven clusters were generated, with authors from universities dominating all clusters with weights C1:0.44; C2:0.33; C3:0.38; C4:0.33; C5:0.21; C6:0.05; C7:0.37. These authors were from North America (C1:0.44; C2:0.16; C3:0.35; C4:0.30; C5:0.23; C6:0.02; C7:0.01). Four clusters were generated for authors' designation (C1:0.01; C3:0.03; C4:0.25; C5:0.22). Seven clusters were generated based on subject areas, with Management found to be dominant (C1:0.03; C2:0.30; C3:0.09; C4:0.29; C5:0.01; C6:0.01; C7:0.18). In the NHRJ, seven clusters were generated for authors from universities (C1:0.04; C2:0.08; C3:0.05; C4:0.18; C5:0.11; C6:0.13; C7:0.02), all of whom were from Asia. Publications by subject area resulted in six clusters, with Economics predominant (C1:0.23; C2:0.02; C3:0.01; C4:0.03; C5:0.01; C6:0.01). The trend analysis of IEEE journal abstracts over two decades showed that publication trends have shifted tremendously from communication and security to artificial intelligence.

The text-miner developed effectively automated the characterisation of academic journal publications and the discovery of trends in them.

Keywords: Online rated journals, Self-organising map, Filtering and stemming algorithms, Bag of words

Word count: 457