This research tackles the problem of automatically rating the difficulty level of a guitar tablature. Building upon recent research, this project proposes several difficulty features and investigates their influence on a set of pre-rated pieces drawn from an existing standard of music levels. The difficulty features are ranked according to their influence on a music school's leveling criteria. Models for automatically rating tablature were built around these experiments, with the goal of a web application that reports the difficulty level of a tablature, addressing the ambiguity inherent in the guitar tablature format. The chosen linear regression model achieved an r-squared of 23.57% and was deployed on a tablature repository website.
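The abstract above does not specify the modeling pipeline; the following is a minimal sketch, assuming scikit-learn, a feature matrix of the proposed difficulty features, and pre-rated levels as targets (all data and dimensions here are hypothetical placeholders):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Hypothetical data: rows are tablatures, columns are proposed
# difficulty features; y holds the pre-rated difficulty levels.
X = np.random.rand(200, 6)
y = np.random.randint(1, 9, size=200).astype(float)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LinearRegression().fit(X_train, y_train)
print("r-squared:", r2_score(y_test, model.predict(X_test)))

# Feature influence can be ranked by coefficient magnitude,
# assuming the features are on comparable scales.
ranking = np.argsort(-np.abs(model.coef_))
print("feature ranking:", ranking)
```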
Abstract The Protector of the Sea is a gesture-based game with an environmental awareness theme that examines each user's gestures and qualifies them through collision detection. The system focuses on assessing the accuracy of the gesture recognition through testing, gathering overall feedback through a questionnaire rated from 4 (Strongly Agree) to 1 (Strongly Disagree), and computing paired t-test statistics on the users' learning from a questionnaire given to the same set of users before and after the game. Although existing approaches to gesture recognition provide relatively accurate feedback on gestures, none has utilized collision detection for recognition. Usage of the system shows that it was an interesting and informative experience for the users; it also provided entertainment and novelty. The researchers see this study as significant to those who would like to explore gesture recognition with motion capture devices using a similar approach in the future. Forty-five (45) users from grade levels 7, 9, and 10 and Alternative Learning System (ALS) students participated in the study, each of whom played the game. These users' gestures were assessed and evaluated throughout the game. Seven gestures were recognized by the system, namely Catch Right, Catch Middle, Catch Left, Catch Right Forward, Catch Middle Forward, Catch Left Forward, and Unload. Based on the testing results, the system achieved 85.73% accuracy, and its overall rating was 84%. The computed t statistic of -5.6552, against a critical value of 2.015, showed that the means of the before and after data differ significantly, indicating a change in the users' existing knowledge after the game was presented. Keywords: collision detection, environmental awareness, gesture, recognition, 3d motion capture, technology, Kinect
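As a minimal sketch of the paired t-test reported above, assuming SciPy; the questionnaire scores below are hypothetical (the study's actual data are not shown):

```python
import numpy as np
from scipy import stats

# Hypothetical pre/post questionnaire scores for the same users;
# the study used 45 participants, only 6 are shown for brevity.
before = np.array([12.0, 10.0, 14.0, 9.0, 11.0, 13.0])
after  = np.array([15.0, 14.0, 16.0, 13.0, 14.0, 17.0])

t_stat, p_value = stats.ttest_rel(before, after)   # paired t-test
crit = stats.t.ppf(0.95, df=len(before) - 1)       # one-tailed, alpha = 0.05
print(f"t = {t_stat:.4f}, p = {p_value:.4f}, critical = {crit:.3f}")

# If |t| exceeds the critical value, the before/after means differ
# significantly, as the abstract reports (t = -5.6552 vs. 2.015).
```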
RATIONALE / SIGNIFICANCE Mosquito-borne diseases such as dengue fever, malaria, and chikungunya have long been among the leading causes of morbidity in the Philippines. In fact, the Global Dengue Initiative identifies dengue fever as a nationally notifiable disease in any country. More than that, the country's Department of Health has created priority prevention and control programs for the aforementioned diseases. Despite these measures, the country continues to face outbreaks of these mosquito-borne diseases; in 2014, a surge in the mosquito population was reported. To prevent dengue outbreaks, effective vector control measures should be in place. A report from the Asia-Pacific and Americas Dengue Prevention Boards identified improving surveillance systems as an initial step in combating dengue. A particular aspect of such a system requires enhanced mosquito-vector surveillance. Despite these suggestions, only a handful of research endeavors currently implement such schemes properly integrated with their mosquito-borne disease surveillance systems. In the Philippines, an initiative by the Department of Science and Technology has focused on ovitraps supplemented with manual landing rate counts, but these efforts are still in their infancy and rather insufficient. While several studies have established the importance of entomological surveillance in supplementing disease surveillance and response, only pupal and adult vector counts are considered reliable because of their high correlation with actual disease cases. Moreover, many studies have found that egg and larval indices offer little value for surveillance because of the low survival rates of eggs and larvae. Thus, this project proposes a cost-effective tool that automatically collects, identifies, and counts adult mosquitoes. The automation allows data collection with minimal human intervention and is suitable even in remote areas where resources are limited. It provides a solution for generating reliable entomological indices, which, in turn, strengthens the disease surveillance system.
INSTRUCTIONS Answer the exercises below. Create a separate source code file for every exercise and name it according to the exercise number (e.g. Exercise 2-1.c). The answers to the advanced questions can be included as separate functions in the original file for the exercise they belong to. Zip all the files and submit the archive via Google Classroom. Naming convention for the zip file: [Last Name], [First Name] - Array and String Exercise.zip. Classroom code: 6rfue3e
Image segmentation is the process of logically dividing an image into multiple parts. The goal is to locate objects of interest and extract them from the rest of the image. More specifically, it assigns logical labels to each of the pixels such that pixels sharing the same label share similar characteristics. There are many techniques for performing image segmentation, some of which are threshold-based, color-based, and texture-based. Your task is to implement an automatic threshold-based image segmentation algorithm called Otsu's Method.
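One possible implementation sketch of Otsu's Method in Python with NumPy (variable names are illustrative): the algorithm builds an intensity histogram and picks the threshold that maximizes the between-class variance of the background and foreground pixel populations.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the Otsu threshold for an 8-bit grayscale image.

    Chooses the intensity level that maximizes the between-class
    variance of the two pixel populations it induces.
    """
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()                  # intensity probabilities
    omega = np.cumsum(prob)                   # class-0 weight up to each level
    mu = np.cumsum(prob * np.arange(256))     # cumulative mean
    mu_total = mu[-1]

    numer = (mu_total * omega - mu) ** 2
    denom = omega * (1.0 - omega)
    sigma_b = np.zeros(256)                   # between-class variance
    valid = denom > 0                         # skip thresholds with an empty class
    sigma_b[valid] = numer[valid] / denom[valid]
    return int(np.argmax(sigma_b))

# Usage: label pixels above the threshold as foreground.
# img = ...  # uint8 array, e.g. loaded with an imaging library
# t = otsu_threshold(img)
# mask = (img > t).astype(np.uint8) * 255
```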
The state of maternal and pediatric health in the Philippines has been an ongoing problem. This paper discusses a tool developed to expedite the process of diagnosing, informing the patient, and prescribing the appropriate medication using novel techniques in dynamic programming. By integrating a medical knowledge base, existing patient data, and an inference engine, the tool generates case-specific advice, acting as a decision support system for medical practitioners. The goal is to facilitate improved delivery of diagnoses, aiding physicians in giving timely and appropriate advice. A team of pediatric physicians piloted and tested the tool, giving it an average usability rating of 9.2 out of 10. KEYWORDS: inference engine, dynamic programming, knapsack, e-health
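The abstract lists knapsack among its keywords without describing the formulation; as a minimal hypothetical sketch, assuming medications are selected to maximize therapeutic benefit under a cost budget, a 0/1 knapsack dynamic program might look like this:

```python
def select_medications(meds, budget):
    """0/1 knapsack over (name, cost, benefit) tuples.

    meds: list of (name, cost, benefit) with integer costs.
    budget: maximum total cost allowed.
    Returns (best_benefit, chosen_names). All inputs are hypothetical;
    the paper's actual formulation is not specified in the abstract.
    """
    n = len(meds)
    # best[i][b] = max benefit using the first i meds with budget b
    best = [[0] * (budget + 1) for _ in range(n + 1)]
    for i, (_, cost, benefit) in enumerate(meds, start=1):
        for b in range(budget + 1):
            best[i][b] = best[i - 1][b]                  # skip med i
            if cost <= b:                                # or take it
                best[i][b] = max(best[i][b],
                                 best[i - 1][b - cost] + benefit)
    # Backtrack to recover the chosen set.
    chosen, b = [], budget
    for i in range(n, 0, -1):
        if best[i][b] != best[i - 1][b]:
            name, cost, _ = meds[i - 1]
            chosen.append(name)
            b -= cost
    return best[n][budget], chosen

# Example with made-up items: maximize benefit within a cost budget of 10.
# print(select_medications([("A", 4, 7), ("B", 3, 5), ("C", 5, 8)], 10))
```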
Traffic congestion is a dire problem around the world, especially in developing countries. As more vehicles are acquired each year, the number on the road is projected to keep rising, aggravating traffic problems. There have been research efforts on sensing technologies to aid traffic planning and management. While successful in developed countries, developing countries are often unable to use these traditional sensing technologies, since most of the research is geared towards sensors deployed in developed-country settings. With abundant data already available, developed countries are a step ahead in creating smarter traffic solutions using ICT strategies. This research discusses methods that can be used to cost-effectively identify reliable traffic data sources suited to developing countries. Keywords: remote sensing, congestion dynamics
BACKGROUND Visualizing data helps reveal interesting patterns that might not be obvious in other representations. It also aids domain experts in extracting information, generating ideas, and formulating hypotheses from the data, which is why data visualization plays a huge role in the data analytics process. However, visualizing high-dimensional data is challenging because humans can only visualize up to three dimensions. Moreover, traditional techniques are incapable of visualizing huge amounts of data because their processing time grows exponentially with the number of data points. This poses a problem because the data being generated in the world is rapidly growing; in fact, the data generated in the past decade is much larger than all the data collected in the past century combined. For now, no algorithm exists that tackles all the problems of handling big data, although there have been many works that address specific aspects of it.

Some existing ways of tackling high-dimensional data are dimensionality reduction techniques, including Random Projections, Multidimensional Scaling (MDS), and Principal Components Analysis (PCA). These algorithms significantly reduce the number of dimensions by mapping the high-dimensional data into lower dimensions. This mapping inevitably loses information, but the algorithms are designed so that useful distances are preserved and information loss is minimized. For data visualization, the number of dimensions has to be reduced to at most three. The most commonly used dimensionality reduction techniques for visualizing high-dimensional data are Self-Organizing Maps (SOM), Multidimensional Scaling (MDS), and Principal Components Analysis (PCA). All three reduce dimensions based on certain properties: local neighborhood relations for SOM, inter-point distances for MDS, and data variance for PCA. The problem is that the time complexity of these algorithms is exponential, which is not suitable for handling big data. Parallelizable implementations of SOM, MDS, and PCA exist, but these only reduce the complexity by a linear factor, which may be adequate for now but will not scale to even larger datasets.

Clustering is a data mining technique that is useful for handling big data. It groups the data points into clusters such that points within a cluster are similar to each other while dissimilarity across clusters is maximized. Clustering algorithms need to run in at most quasilinear time to be efficient on big data. Many clustering algorithms can do this, such as BIRCH, FCM, DBSCAN, EM, and OPTICS, to name a few. BFR (Bradley-Fayyad-Reina) and CLIQUE seem promising for the task of big data visualization. The BFR algorithm is a variant of k-Means that can handle large data. The idea is that if we assume the clusters to be normally distributed, we can summarize each cluster by keeping only its mean and standard deviations and discarding the individual samples in the cluster. Effectively, this reduces the number of data points to be processed in the succeeding iterations.
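As a minimal sketch of the BFR summarization idea described above (an illustrative class, not the original implementation): a cluster is kept as sufficient statistics, a count, a per-dimension sum, and a per-dimension sum of squares, from which the mean and standard deviation are recovered without retaining any points.

```python
import numpy as np

class ClusterSummary:
    """BFR-style sufficient statistics: N, SUM, and SUMSQ per dimension.

    Points are discarded after being absorbed; the cluster's mean and
    standard deviation are recoverable from the statistics alone.
    """
    def __init__(self, dim):
        self.n = 0
        self.s = np.zeros(dim)       # per-dimension sum of coordinates
        self.sq = np.zeros(dim)      # per-dimension sum of squares

    def add(self, point):
        self.n += 1
        self.s += point
        self.sq += point ** 2

    @property
    def mean(self):
        return self.s / self.n

    @property
    def std(self):
        var = self.sq / self.n - self.mean ** 2
        return np.sqrt(np.maximum(var, 0.0))   # clamp tiny negatives

# Usage: summarize a stream of points without retaining them.
# cs = ClusterSummary(dim=3)
# for p in np.random.randn(10_000, 3):
#     cs.add(p)
# print(cs.mean, cs.std)
```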
CLIQUE, on the other hand, is a subspace clustering algorithm: it looks for clusters in subsets of the dimensions, which may be useful both in reducing the number of dimensions and in revealing patterns that would otherwise be hidden by irrelevant dimensions.
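A minimal sketch of CLIQUE's bottom-up candidate generation, assuming equal-width grid cells; the bins and min_points parameters are hypothetical, not taken from the source:

```python
import numpy as np
from itertools import combinations

def dense_units_1d(data, bins=10, min_points=50):
    """Find dense 1-D grid cells per dimension (CLIQUE's bottom level).

    Returns {(dim, cell): point_index_set} for cells holding at least
    min_points points.
    """
    dense = {}
    for d in range(data.shape[1]):
        col = data[:, d]
        edges = np.linspace(col.min(), col.max(), bins + 1)
        cells = np.clip(np.digitize(col, edges) - 1, 0, bins - 1)
        for c in range(bins):
            idx = np.flatnonzero(cells == c)
            if len(idx) >= min_points:
                dense[(d, c)] = set(idx)
    return dense

def dense_units_2d(dense_1d, min_points=50):
    """Apriori-style join: a 2-D unit can only be dense if both of its
    1-D projections are dense, so intersect their point sets."""
    dense = {}
    for (u1, pts1), (u2, pts2) in combinations(dense_1d.items(), 2):
        if u1[0] != u2[0]:                   # units from different dimensions
            common = pts1 & pts2
            if len(common) >= min_points:
                dense[(u1, u2)] = common
    return dense
```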
Stories are an essential part of human knowledge and communication. They are composed of a series of related concepts, such as events and states, which people use to share ideas with other members of society. Past research has tried to replicate the way humans produce or understand stories through creative text generation systems. Unfortunately, there is a lack of data on relationships between events within and across sentences in a story, owing to insufficient commonsense knowledge. Therefore, a system called Eventure, which extracts instances of event relations from children's stories, has been implemented. The system identifies concepts, as well as meta-data, in stories using a third-party language processing tool that provides preprocessing capabilities such as tokenization and POS tagging. With the concepts and meta-data collected, Eventure applies a predefined list of grammar templates and rules to extract instances of event relations and ultimately produces an ontology that stores them. The initial list of grammar rules was collected from prior work and modified to accommodate the meta-data of concepts. A new event relation between a causing state and a resulting event was also added. To validate the system's accuracy, a gold standard of extracted event relation instances was created using ten children's stories. The system yielded a precision of only 3.27%, a recall of 10.14%, and an F-measure of 4.95%, owing to the relatively generic extraction templates, the complexity of the children's stories, and inherent problems with the POS tagger used. KEYWORDS: causal relation, relation extraction, knowledge representation, lexical semantics
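As a minimal sketch of how the reported precision, recall, and F-measure can be computed against such a gold standard (the example relation tuples are hypothetical):

```python
def prf(extracted, gold):
    """Precision, recall, and F-measure of extracted relation instances
    against a gold standard, treating both as sets of items."""
    extracted, gold = set(extracted), set(gold)
    tp = len(extracted & gold)                  # correctly extracted
    precision = tp / len(extracted) if extracted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Example with hypothetical (cause, effect) relation tuples:
# p, r, f = prf({("rain", "flood"), ("wind", "fall")},
#               {("rain", "flood"), ("fire", "smoke")})
```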