Dissertations/Theses

Click here to access the files directly from the UFRN Digital Library of Theses and Dissertations (Biblioteca Digital de Teses e Dissertações da UFRN)

2024
Dissertations
1
  • ALISON HEDIGLIRANES DA SILVA
  • An end-to-end framework to support moving objects in smart cities applications

  • Advisor : NELIO ALESSANDRO AZEVEDO CACHO
  • COMMITTEE MEMBERS :
  • NELIO ALESSANDRO AZEVEDO CACHO
  • GIBEON SOARES DE AQUINO JUNIOR
  • FREDERICO ARAUJO DA SILVA LOPES
  • LEOPOLDO MOTTA TEIXEIRA
  • Date: Apr 2, 2024

  • Abstract:
  • The increasingly intensive use of geolocation devices significantly drains their batteries, making certain applications difficult to use. To address this issue, the present study introduces a comprehensive framework for collecting, processing, and visualizing geolocation data on mobile devices, with a particular focus on smartphones. The system is composed of an Android library that enables the transmission of geolocation data while offering configuration options to enhance accuracy and reduce battery consumption. Additionally, a Java framework has been developed to receive and process this data, integrating with PostGIS extensions to ensure the acquisition of highly precise positions. Finally, a JavaScript library has been implemented to receive and display stored geolocations, providing a clear and intuitive understanding of underlying geographic patterns.

2
  • PAULO EUGÊNIO DA COSTA FILHO
  • Native Artificial Intelligence Deployment in IoSGT Systems: A Holistic Approach

  • Advisor : MARCIO EDUARDO KREUTZ
  • COMMITTEE MEMBERS :
  • AUGUSTO JOSE VENANCIO NETO
  • DENIS LIMA DO ROSÁRIO
  • EDUARDO NOGUEIRA CUNHA
  • MARCIO EDUARDO KREUTZ
  • Date: Apr 30, 2024

  • Abstract:
  • The growing energy demand intensifies the search for technological modernizations capable of meeting imminent needs, as well as increasing concerns about mitigating the environmental impacts that come with this escalation. The state of the art in Smart Grids shows evidence of the use of AI techniques in IoSGT use cases, aiming to revolutionize the way energy is produced, transmitted, and consumed. In fact, AI is expected to offer unprecedented levels of disruption in the electric sector through intelligent control methods that can unlock new value streams for consumers while supporting a highly assertive, reliable, and resilient system. However, much research is still needed in this area, such as the positioning of AI-based instances along the edge-cloud continuum, the types of techniques and algorithms for each use case, and the efficient use of predictive analytics capable of predicting future demands and detecting failures and anomalies in the power grid, allowing proactive measures to be adopted and network reliability to be improved, among many others.

    This research proposal aims to address some of the previously mentioned issues through a holistic architecture named IAIoSGT (Artificial Intelligence native in IoSGT). IAIoSGT is designed with the goal of accelerating the use of AI techniques in an approach based on the edge-cloud continuum. The assessment of the IAIoSGT architecture's compliance, as well as its behavior and feasibility of use, was conducted on two distinct test benches, addressing both physical devices and Machine Learning algorithms. Two comprehensive tests were carried out: the first pertains to the classification and identification of electro-electronic devices connected to the same electrical network, involving Machine Learning algorithms such as KNN, SVM, MLP, NB, and DT. The second test focused on energy consumption prediction, utilizing the LSTM algorithm.
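    As an illustration of the five-algorithm comparison mentioned above, the sketch below runs KNN, SVM, MLP, NB, and DT through cross-validation with scikit-learn; the dataset and all hyperparameters are placeholders, not the ones used in the thesis.

      # Minimal sketch of a five-classifier comparison (KNN, SVM, MLP, NB, DT).
      # Data and parameters are illustrative only.
      from sklearn.datasets import make_classification
      from sklearn.model_selection import cross_val_score
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.svm import SVC
      from sklearn.neural_network import MLPClassifier
      from sklearn.naive_bayes import GaussianNB
      from sklearn.tree import DecisionTreeClassifier

      # Stand-in for the appliance-signature dataset described in the abstract.
      X, y = make_classification(n_samples=1000, n_features=20, n_classes=4,
                                 n_informative=8, random_state=42)

      models = {
          "KNN": KNeighborsClassifier(),
          "SVM": SVC(),
          "MLP": MLPClassifier(max_iter=1000),
          "NB": GaussianNB(),
          "DT": DecisionTreeClassifier(),
      }
      for name, model in models.items():
          scores = cross_val_score(model, X, y, cv=5)
          print(f"{name}: mean accuracy = {scores.mean():.3f}")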

3
  • CHARLES HALLAN FERNANDES DOS SANTOS
  • Intelligent Disaster Recovery of 5G Operation, Management, and Control Systems of 5G Infrastructures

  • Advisor : AUGUSTO JOSE VENANCIO NETO
  • COMMITTEE MEMBERS :
  • AUGUSTO JOSE VENANCIO NETO
  • ROGER KREUTZ IMMICH
  • RAMON DOS REIS FONTES
  • FELIPE SAMPAIO DANTAS DA SILVA
  • Date: Apr 30, 2024

  • Abstract:
  • The increasing complexity of fifth-generation (5G) mobile networks, caused by the high number of mobile devices and the demands of new application requirements, calls for management systems capable of keeping 5G networks always best connected and best served. With this in mind, network operators employ Operation, Management, and Control (OMC) centers, an ecosystem of different technologies and tools that interoperate to provide the operations and management functions needed to keep the network alive with the levels guaranteed in Service Level Agreements (SLAs) over time. An OMC is strategically built in a centralized facility and must be able to respond to the occurrence of a "disaster", meaning events that potentially cause some level of network service unavailability. To guarantee a level of fault tolerance, redundant OMC instances must be adopted, where a backup OMC assumes control when the principal OMC instance fails. A Disaster Recovery System (DRS) has the goal of assigning which OMC instance takes principal control inside a 5G ecosystem. To this end, the DRS constantly monitors the 5G ecosystem to detect disaster events so that a backup OMC can be assigned to assume principal control.

    Since a DRS mainly operates in a reactive manner, meaning that the switching between OMCs is done after a disaster occurrence is detected, this master's research is devoted to exploiting Machine Learning techniques to efficiently control the assignment of multi-redundant OMCs in the control of 5G networks. To this end, the iDRS (Intelligent Disaster Recovery System) is introduced, which relies on predictive analyses to act proactively in the attribution of OMC control, with the hypothesis of providing an efficient and agile system to maintain 5G networks throughout their lifetime.
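    To make the proactive switch concrete, below is a minimal sketch of a prediction-driven failover decision; the function names, data layout, and the 0.8 threshold are hypothetical, not the actual iDRS implementation.

      # Illustrative sketch of a prediction-driven OMC failover decision.
      # predict_failure_prob() stands in for the ML model described above;
      # all names and the 0.8 threshold are hypothetical.
      def choose_principal_omc(omcs, predict_failure_prob, threshold=0.8):
          """Return the OMC instance that should hold principal control."""
          principal = next(o for o in omcs if o["role"] == "principal")
          if predict_failure_prob(principal) < threshold:
              return principal  # no disaster predicted: keep current principal
          # Proactively hand control to the backup least likely to fail.
          backups = [o for o in omcs if o["role"] == "backup"]
          return min(backups, key=predict_failure_prob)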

4
  • FELIPE MORAIS DA SILVA
  • Gerenciando Tarefas Assíncronas de Longa Execução em Microsserviços Multilocatários

  • Advisor : THAIS VASCONCELOS BATISTA
  • COMMITTEE MEMBERS :
  • THAIS VASCONCELOS BATISTA
  • EVERTON RANIELLY DE SOUSA CAVALCANTE
  • NELIO ALESSANDRO AZEVEDO CACHO
  • CARLOS ANDRE GUIMARÃES FERRAZ
  • Date: May 16, 2024

  • Abstract:
  • A multi-tenant microservices architecture involving components with asynchronous interactions and batch jobs requires efficient strategies to manage asynchronous workloads. This work addresses this issue in the context of a company that is a leader in the development of tax software solutions used by many national and multinational companies in Brazil. A critical process provided by the company's cloud-based solutions involves tax integration, which includes the coordination of complex tax calculation tasks and needs to be supported by asynchronous operations using a messaging service to guarantee the correct order. These operations may be independent of each other, which characterizes the parallel process, or they may depend on each other, which characterizes the First In First Out (FIFO) process. FIFO processes have additional restrictions compared to parallel ones. For this reason, we specified and implemented two approaches to manage asynchronous workloads related to tax integration within a multi-tenant microservices architecture in the company's context: (i) a polling-based approach that employs a queue as a Distributed Lock (DL) and (ii) a push-based approach named Single Active Consumer (SAC) that relies on the message broker's logic to deliver messages. These approaches aim to achieve efficient resource allocation when dealing with a growing number of container replicas and tenants. This work also presents a performance evaluation of the DL and SAC approaches to clarify how asynchronous workloads impact the management of multi-tenant microservices architectures from the delivery and deployment standpoint.
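    As an illustration of the push-based SAC approach, below is a minimal sketch using RabbitMQ's single-active-consumer queue argument via the pika client; the queue name and connection details are hypothetical, and this is not the implementation built at the company.

      # Minimal sketch of a Single Active Consumer (SAC) queue with RabbitMQ/pika.
      # The broker delivers to only one consumer at a time; if that consumer
      # dies, another registered replica is promoted, preserving FIFO handling.
      import pika

      connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
      channel = connection.channel()

      # Hypothetical per-tenant queue declared as single-active-consumer.
      channel.queue_declare(
          queue="tax-integration.tenant-42",
          durable=True,
          arguments={"x-single-active-consumer": True},
      )

      def handle(ch, method, properties, body):
          # Process one tax-integration task, then acknowledge it.
          print("processing:", body)
          ch.basic_ack(delivery_tag=method.delivery_tag)

      channel.basic_consume(queue="tax-integration.tenant-42",
                            on_message_callback=handle)
      channel.start_consuming()
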
5
  • FRANKLIN MATHEUS DA COSTA LIMA
  • Development and Evaluation of a Software to Support Experiments with Interactive Systems based on Brain-Computer Interface

  • Advisor : LEONARDO CUNHA DE MIRANDA
  • COMMITTEE MEMBERS :
  • BRUNO MOTTA DE CARVALHO
  • JULIO CESAR DOS REIS
  • LEONARDO CUNHA DE MIRANDA
  • MONICA MAGALHAES PEREIRA
  • ROBERTO PEREIRA
  • Date: May 29, 2024

  • Abstract:
  • Brain-Computer Interfaces (BCIs) enable interaction with computers through the user's brain activity. In recent years, BCIs have been gaining prominence due to technological advancements. In the context of BCIs, electroencephalography (EEG) is a common method for scanning brain activity, and among the available EEG devices we can highlight the NeuroSky MindWave. This work presents Software designed to support BCI researchers in their studies, specifically those who use the MindWave as their EEG device. This work details the usage context of the Software and provides the necessary documentation to understand its functioning. The Software tools are described, and all functionalities are presented to showcase the possibilities that can be achieved using the developed Software. Lastly, the Software is evaluated through a pilot study, whose execution made it possible to verify the functionality of the Software in a BCI research scenario.
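    For context, MindWave-class headsets are commonly read through NeuroSky's ThinkGear Connector, which serves newline-delimited JSON over a local TCP socket; the sketch below is an assumption about that usual setup (default port 13854, carriage-return-delimited JSON), not the Software described above.

      # Hedged sketch: reading NeuroSky eSense values from the ThinkGear
      # Connector's local JSON socket. Port, handshake, and delimiter follow
      # the commonly documented protocol and are assumptions, not thesis code.
      import json
      import socket

      sock = socket.create_connection(("127.0.0.1", 13854))
      sock.sendall(json.dumps({"enableRawOutput": False, "format": "Json"}).encode())

      buffer = b""
      while True:
          buffer += sock.recv(4096)
          while b"\r" in buffer:
              line, buffer = buffer.split(b"\r", 1)
              if not line.strip():
                  continue
              packet = json.loads(line)
              esense = packet.get("eSense")
              if esense:
                  print("attention:", esense.get("attention"),
                        "meditation:", esense.get("meditation"))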

6
  • MARIA FERNANDA CABRAL RIBEIRO
  • Fault Tolerance in FPGA-based Multilayer Perceptron: Case Study SALVE TODAS
  • Advisor : MONICA MAGALHAES PEREIRA
  • COMMITTEE MEMBERS :
  • MONICA MAGALHAES PEREIRA
  • ANNE MAGALY DE PAULA CANUTO
  • ALBA SANDYRA BEZERRA LOPES
  • FERNANDA GUSMÃO DE LIMA KASTENSMIDT
  • Date: May 31, 2024

  • Abstract:
  • The concept of fault tolerance can be understood as the ability of a system to maintain correct operation even after the occurrence of a failure. This area of study emerged in the 1950s, aimed at dealing with failures in military and aerospace equipment operating in hostile and/or remote environments, and it has since proven to be a prominent field of study, especially with the popularization of computers and embedded systems.
    In this context, this work aims at the application of fault tolerance techniques in an Artificial Neural Network with Multilayer Perceptron (MLP) architecture embedded in an FPGA. The MLP network in question is part of a system for women's safety that aims to identify possible risk situations for its users. To this end, the system has sensors for vital signs, sudden movements, and geolocation that provide information about the user's current situation. Since the MLP network plays a critical role in identifying risk situations, it is necessary to apply techniques aimed at increasing its reliability and, with it, the user's safety. Therefore, this work analyzes the gains and impacts of applying four combined fault tolerance techniques in the embedded MLP. The techniques used include: the handling of the weights and biases of neurons in the network's processing layers; the removal of hidden neurons that are less sensitive to failures; the duplication of hidden neurons that are more sensitive to failures (a technique known as Augmentation); and Triple Modular Redundancy for the neurons in the input and output layers of the network.
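    As a small illustration of the last technique above, here is a sketch of Triple Modular Redundancy (TMR) majority voting over triplicated neuron outputs; this is illustrative code, not the FPGA implementation.

      # Sketch of TMR voting: each protected neuron is evaluated by three
      # replicas, and the median of the real-valued outputs masks a single
      # faulty replica. Illustrative only, not the FPGA design.
      import statistics

      def tmr_vote(replica_outputs):
          """Majority-vote (median) three replica outputs of one neuron."""
          a, b, c = replica_outputs
          return statistics.median([a, b, c])

      # Example: the second replica is faulty; the vote still returns 0.71.
      print(tmr_vote([0.71, 9999.0, 0.71]))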

7
  • JOÃO MANUEL PIMENTEL SEABRA
  • CLUPIR: A Model for Classification of Visual Software Modeling Languages

  • Advisor : LYRENE FERNANDES DA SILVA
  • COMMITTEE MEMBERS :
  • LYRENE FERNANDES DA SILVA
  • LEONARDO CUNHA DE MIRANDA
  • MARCIA JACYNTHA NUNES RODRIGUES LUCENA
  • CAMILA DE ARAUJO
  • Date: Jun 7, 2024

  • Abstract:
  • Software Visual Modeling Languages (SVMLs) play a crucial role in facilitating systems analysis and documentation processes, as well as communication between those involved. However, the large number of languages available makes it difficult for the software designer to select an appropriate SVML to model a given problem situation. This research created a classification model (CLUPIR) that aims to organize and catalog SVMLs based on a set of aspects. The existing classification models and their classification aspects were surveyed through a systematic mapping of the literature, which supported the choice of the classification aspects of the CLUPIR model. At the end of the research, to demonstrate the use of the model, eight SVMLs were classified. After that, we validated the usefulness of the model with 30 professionals from the software industry.

8
  • VANESSA DANTAS DE SOUTO COSTA
  • Microservices Architecture for Malaria Detection and Classification

  • Advisor : BRUNO MOTTA DE CARVALHO
  • COMMITTEE MEMBERS :
  • ANNE MAGALY DE PAULA CANUTO
  • BRUNO MOTTA DE CARVALHO
  • DANIEL LÓPEZ CODINA
  • ITAMIR DE MORAIS BARROCA FILHO
  • Date: Jul 26, 2024

  • Abstract:
  • Malaria affects millions of people each year, predominantly in resource-limited countries. According to the World Health Organization (WHO, 2022), there were an estimated 619,000 Malaria deaths globally in 2021 compared to 625,000 in 2020. Thus, automated classification of Malaria-infected blood smear images is a critical component in improving the efficiency and accuracy of Malaria diagnosis. With the aim of enabling a flexible solution that would allow the integration of neural networks to treat diseases on a large scale, we propose a methodology for Malaria disease classification. Our approach involves data collection, preprocessing and YOLO object detection (in real-time), encapsulated into a microservices environment, creating a modular and scalable system that efficiently handles inference requests while ensuring flexibility and maintainability (as shown in the load test experiments). Thus, we are bridging the gap between fighting Malaria and technology and making an application that could serve as a blueprint for future works in the disease classification field.

9
  • SARA GUIMARAES NEGREIROS
  • Guimarães Framework: Supporting the construction of documentation-guided automated tests for Arduino systems
  • Advisor : ROBERTA DE SOUZA COELHO
  • COMMITTEE MEMBERS :
  • ROBERTA DE SOUZA COELHO
  • MARCIA JACYNTHA NUNES RODRIGUES LUCENA
  • EIJI ADACHI MEDEIROS BARBOSA
  • FRANCISCO CARLOS GURGEL DA SILVA SEGUNDO
  • Date: Aug 1, 2024

  • Abstract:
  • Arduino embedded systems are used in teaching activities and applied in various automation scenarios. In addition to a literature review, this research conducted a survey to investigate the tools used in the documentation and testing activities of the development of these systems. As a result, the Guimarães framework is proposed for teaching the development of embedded systems with Arduino, including the execution of automated tests at the component and system levels, supported by documentation. In an application scenario, the development process was carried out using the Guimarães framework, comprising the development of requirement diagrams, system documentation using statecharts, the definition of test cases with analysis of the statechart's path tree as a stopping criterion, documentation and electronic hardware prototyping, the development of software behavior and test cases, and the execution and analysis of the test cases.

10
  • NICOLAS JACOBINO MARTINS
  • Vc-means: Comparative Analysis of a New Clustering Algorithm
  • Advisor : BENJAMIN RENE CALLEJAS BEDREGAL
  • COMMITTEE MEMBERS :
  • BENJAMIN RENE CALLEJAS BEDREGAL
  • ANNE MAGALY DE PAULA CANUTO
  • HULIANE MEDEIROS DA SILVA
  • Date: Aug 9, 2024

  • Abstract:
  • This study presents the development and evaluation of the "VC-Means" algorithm as an innovative approach to data clustering. VC-Means is based on a previously developed algorithm called "CK-Means" and is designed to identify patterns and specific clusters in data sets. Statistical tests were conducted on 20 traditional data sets, comparing and validating its efficiency against three well-known algorithms in the literature: K-Means, Fuzzy C-Means (FCM), and Gustafson-Kessel (GK). The evaluation was performed using validation indices such as the DB index, Silhouette, Adjusted Rand Index, Calinski-Harabasz, Adjusted Mutual Information, and V-measure. The results showed that VC-Means achieved great performance, with no significant statistical difference compared to the other algorithms, and demonstrated remarkable efficiency in terms of processing time.
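    For illustration, here is a minimal sketch of the index-based evaluation described above, using scikit-learn's K-Means and three of the cited indices on synthetic data; VC-Means, FCM, and GK are not in scikit-learn, so only the baseline is shown.

      # Sketch of validation-index evaluation for one clustering result.
      # Synthetic blobs and K-Means stand in for the thesis's algorithms.
      from sklearn.cluster import KMeans
      from sklearn.datasets import make_blobs
      from sklearn.metrics import (adjusted_rand_score, davies_bouldin_score,
                                   silhouette_score)

      X, y_true = make_blobs(n_samples=500, centers=4, random_state=0)
      labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

      print("Silhouette:", silhouette_score(X, labels))
      print("DB index:", davies_bouldin_score(X, labels))
      print("Adjusted Rand:", adjusted_rand_score(y_true, labels))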

Theses
1
  • NICOLÁS EDUARDO ZUMELZU CÁRCAMO
  • Fundamentals of a Fuzzy Mathematical Analysis Based on Fuzzy Numbers and Admissible Orders
  • Advisor : BENJAMIN RENE CALLEJAS BEDREGAL
  • COMMITTEE MEMBERS :
  • BENJAMIN RENE CALLEJAS BEDREGAL
  • REGIVAN HUGO NUNES SANTIAGO
  • ROBERTO ANTONIO DÍAZ MATAMALA
  • GRAÇALIZ PEREIRA DIMURO
  • JOSÉ EDMUNDO MANSILLA VILLARROEL
  • RUI EDUARDO BRASILEIRO PAIVA
  • Date: Feb 26, 2024

  • Abstract:
  • The notion of admissible orders in interval fuzzy logic emerged in 2010 with the aim of providing a minimum criterion that a total order on the set of closed subintervals of the unit interval [0,1] should meet to be used in applications of this fuzzy theory. Later, this same idea was adapted to other extensions of fuzzy logic. In this thesis, we take the idea of admissible orders outside the context of extensions of fuzzy logic. In fact, here we introduce the notion of admissible order for fuzzy numbers equipped with a partial order, that is, a total order that refines this partial order. We pay special attention to the partial order proposed by Ramik and Rimanek in 1985. Furthermore, we present a method to construct admissible orders on fuzzy numbers from admissible orders defined for intervals, considering a superiorly dense sequence, and we prove that this order is admissible for the order of Ramik and Rimanek. From these admissible orders we study fundamental concepts of Mathematical Analysis in the context of fuzzy numbers. The objective is to take the first steps towards developing a mathematical analysis of fuzzy numbers under certain admissible orders in a robust and well-founded way, preserving as much as possible the properties of traditional mathematical analysis. In this way, we introduce the notion of a Riemann integral over fuzzy numbers, called the fuzzy Riemann integral, considering admissible orders, and we study properties and characterizations of this integral. We formalize the concepts of vector space without inverses and ordered vector space without inverses, a kind of hyperstructure that generalizes the conventional notion of ordered vector spaces. It is worth noting that the space of triangular fuzzy numbers (TFNs), and TFNs with some orders, are examples of both hyperstructures. Furthermore, we introduce the notion of increasing functions of average type over fuzzy numbers equipped with admissible orders in general, characterizing them as idempotent, and in particular in the ordered vector space without inverses. Finally, we introduce the concept of weighted vector-fuzzy graphs and use tools built from average-type functions in the ordered vector space without inverses to solve types of shortest path problems in weighted graphs.
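    For reference, the component-wise order attributed to Ramik and Rimanek, and the admissibility requirement discussed above, can be written via α-cuts as follows; this is a sketch of the usual definitions, not necessarily the thesis's exact formulation.

      % Ramik-Rimanek partial order on fuzzy numbers A, B with
      % alpha-cuts  A_alpha = [A^-(alpha), A^+(alpha)]:
      A \preceq_{RR} B \iff A^{-}(\alpha) \le B^{-}(\alpha)
        \ \text{and}\ A^{+}(\alpha) \le B^{+}(\alpha) \quad \forall \alpha \in [0,1]

      % A total order \preceq on fuzzy numbers is admissible (with respect
      % to \preceq_{RR}) when it refines it:
      A \preceq_{RR} B \implies A \preceq B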

2
  • HELOÍSA FRAZÃO DA SILVA SANTIAGO
  • Interval-Valued Copulas

  • Advisor : ANNE MAGALY DE PAULA CANUTO
  • COMMITTEE MEMBERS :
  • ANNE MAGALY DE PAULA CANUTO
  • ANTONIA JOCIVANIA PINHEIRO
  • GRAÇALIZ PEREIRA DIMURO
  • HELIDA SALLES SANTOS
  • REGIVAN HUGO NUNES SANTIAGO
  • Date: Mar 12, 2024

  • Abstract:
  • Copulas are functions that play an important role in probability theory. Since interval probability takes into account the imprecision in the probability of some events, it is likely that interval copulas have a relevant contribution to make to interval probability theory. This work introduces and analyzes interval-valued copulas and their properties. We provide a condition for an interval-valued copula to be 1-Lipschitz and, from interval-valued automorphisms, we obtain the conjugate interval-valued copula and some important inherited properties. We have seen that the Archimedean interval-valued copula, in most cases, has its behavior defined by its generating function. We also show a condition for this generating function to generate an interval-valued copula, and a version of Sklar's theorem for representable interval-valued copulas.
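    For orientation, the classical (point-valued) copula conditions that the interval-valued notions above generalize, together with Sklar's theorem, are stated below; these are the standard textbook definitions, not results of the thesis.

      % A bivariate copula is C : [0,1]^2 -> [0,1] satisfying
      C(u, 0) = C(0, v) = 0, \qquad C(u, 1) = u, \qquad C(1, v) = v
      % and the 2-increasing property: for u_1 \le u_2 and v_1 \le v_2,
      C(u_2, v_2) - C(u_2, v_1) - C(u_1, v_2) + C(u_1, v_1) \ge 0

      % Sklar's theorem: every joint CDF H with marginals F, G satisfies
      H(x, y) = C(F(x), G(y))
      % for some copula C, unique on (Ran F) x (Ran G).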

3
  • ALAN DE OLIVEIRA SANTANA
  • Using Mutation Analysis to Identify Errors in Mathematical Problem Solving

  • Advisor : EDUARDO HENRIQUE DA SILVA ARANHA
  • COMMITTEE MEMBERS :
  • EDUARDO HENRIQUE DA SILVA ARANHA
  • MARCIA JACYNTHA NUNES RODRIGUES LUCENA
  • ROBERTA DE SOUZA COELHO
  • KLEBER TAVARES FERNANDES
  • THIAGO REIS DA SILVA
  • Date: Mar 25, 2024

  • Abstract:
  • Learning mathematics can be a significant challenge for students around the world. The difficulties encountered by these students vary, such as lack of attention, methodological problems, lack of mastery of prerequisite knowledge, reading difficulties, personal issues, among others. Despite being complex, these difficulties often manifest themselves in specific errors in solving mathematical problems, allowing experts to identify them and associate them with probable causes. In this context, common errors stand out, such as operator substitutions, rounding errors, incorrect results of operations, among others. These errors can be mapped and generalized since they are integral parts of students' solutions. Thus, intelligent systems, such as Intelligent Tutoring Systems (ITS), can be developed to address these difficulties by identifying errors and providing feedback to teachers and students themselves. Based on the above, this work aims to propose a model for generalizing common errors to be applied in the step-by-step identification of error origins. To achieve this, the model will utilize the concept of mutants to generate distractors that will serve as parameters for identifying the source of the problems. To gather relevant data for this study, some research efforts have focused on assessing the state of the art of ITS applied to mathematics in the Brazilian and international scenarios. Furthermore, exploratory studies have been conducted to identify common errors that can be mapped for the generation of the mutation model. Next, the presentation of the mutant modeling is carried out, along with the description of the architecture of the ITS for mathematics and studies that seek to validate it. The main research hypotheses indicate that the use of mutant modeling applied to mathematics through an ITS allows for greater dynamism in creating error scenarios, and they can be associated with problems that go beyond the analysis of the test. Another hypothesis is that feedback based on distractors generated by the mutation model, combined with step-by-step analysis of students' responses, allows for more detailed identification of the error location, facilitating the generation of feedback from the ITS.
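    To illustrate the mutation idea described above, here is a minimal sketch that generates distractors by mutating the operator of one solution step; the mutation table is a toy example, not the thesis's model.

      # Sketch: generate mutant distractors for a step like "7 + 5" by
      # swapping the operator, mimicking common operator-substitution errors.
      # The MUTANTS table is illustrative, not the thesis's mutation model.
      import operator

      MUTANTS = {"+": ["-", "*"], "-": ["+"], "*": ["+"], "/": ["*"]}
      OPS = {"+": operator.add, "-": operator.sub,
             "*": operator.mul, "/": operator.truediv}

      def distractors(a, op, b):
          """Return (mutant_expression, value) pairs for one solution step."""
          return [(f"{a} {m} {b}", OPS[m](a, b)) for m in MUTANTS[op]]

      # If a student answers 2 for "7 + 5", the mutant "7 - 5" matches,
      # pointing to an operator-substitution error at this step.
      print(distractors(7, "+", 5))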

4
  • JADSON JOSE DOS SANTOS
  • A Deep Dive into Continuous Integration Monitoring Practices

  • Advisor : UIRA KULESZA
  • COMMITTEE MEMBERS :
  • UIRA KULESZA
  • EDUARDO HENRIQUE DA SILVA ARANHA
  • ITAMIR DE MORAIS BARROCA FILHO
  • DANIEL ALENCAR DA COSTA
  • GUSTAVO HENRIQUE LIMA PINTO
  • RODRIGO BONIFACIO DE ALMEIDA
  • Date: Apr 25, 2024

  • Abstract:
  • One of the main activities in software development is monitoring, which plays a vital role in verifying the proper implementation of processes, identifying errors, and discovering improvement opportunities. Continuous Integration (CI) comprises a set of widely adopted practices that enhance software development. However, there are indications that developers may not adequately monitor all CI practices. In this thesis, we dive deep into the ocean of CI practice monitoring. Our goal is to discover how this monitoring is conducted, demonstrate the advantages of monitoring CI practices, and highlight the challenges that need to be overcome.

    In our first study, we analyzed the impact of specific CI practices on the volume of Pull Requests and bug-related Issues. Our results revealed a positive correlation between CI practices and an increase in the number of merged pull requests. We also identified a significant correlation with the number of bug-related Issues. Additionally, our results suggest that higher values of CI practices may indicate better quality in the development process.

    Subsequently, in our second study, we investigated the importance developers attribute to these practices and the monitoring support provided by the most popular CI tools. We found that developers generally monitor only coverage and basic build metadata (e.g., build duration and status). Developers expressed interest in monitoring CI practices if given the opportunity. Furthermore, we identified that several of the leading CI services still offer only incipient support for monitoring CI practices.

    Finally, we evaluated monitoring in real scenarios by conducting a case study on three projects from three different organizations, in which we could examine in more depth developers' interest in monitoring CI practices, its benefits and challenges, and the evolution of CI practices over a two-month period. The case study revealed that monitoring CI practices offers several benefits to a project and is inexpensive to apply. Participants expressed a strong desire to integrate CI monitoring dashboards into the most popular CI services.

5
  • THIAGO SOARES MARQUES
  • Model-and-indicator-based GRASP-VNS for Two Problems concerning Intensity Modulated Radiotherapy Planning

  • Advisor : ELIZABETH FERREIRA GOUVEA GOLDBARG
  • COMMITTEE MEMBERS :
  • ELIZABETH FERREIRA GOUVEA GOLDBARG
  • HUDSON GEOVANE DE MEDEIROS
  • MATHEUS DA SILVA MENEZES
  • PAULO HENRIQUE ASCONAVIETA DA SILVA
  • SILVIA MARIA DINIZ MONTEIRO MAIA
  • Date: Apr 26, 2024

  • Abstract:
  • Intensity-modulated radiotherapy (IMRT) is a widely used cancer treatment. Planning this type of treatment involves two complex computational problems related to the choice of the beam angles used to irradiate the patient and the intensity that each beam must have so that cancer cells are killed while regions of healthy tissue are spared. Metaheuristics have been widely used to address complex problems, and hybridizations of metaheuristics often result in methods that are even more effective than the metaheuristics used alone. In the context of hybridization there are also matheuristics, which combine metaheuristics with mathematical programming. The research reported in this work is inserted in this context. An algorithm is proposed that hybridizes the GRASP (Greedy Randomized Adaptive Search Procedure) and VNS (Variable Neighborhood Search) metaheuristics with mathematical programming models to address the two problems mentioned above. A third approach based on automaton learning, called GRASP-VNS-IA, was also explored to determine the execution order of the VNS neighborhoods. Of the four models used, two were proposed in this study. The solutions produced by the algorithm are evaluated using an indicator that combines four indicators, three of which are proposed in this study. GRASP-VNS was compared with GRASP and GRASP-VNS-IA. The algorithms were tested on a set of ten liver cancer instances that are known to be challenging. The results produced by the algorithms were evaluated using quality indicators and histograms, and statistical tests were used to support the conclusions regarding the behavior of the algorithms.
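    A schematic of the GRASP-VNS hybridization described above follows; only the structure is shown, with the construction, neighborhoods, and evaluation used in the thesis abstracted behind placeholder functions.

      # Skeleton of a GRASP-VNS hybrid: GRASP builds greedy-randomized
      # solutions; VNS then explores a sequence of neighborhoods, restarting
      # from the first one after every improvement. Placeholders throughout.
      def grasp_vns(construct, neighborhoods, evaluate, iterations=100):
          best = None
          for _ in range(iterations):
              sol = construct()                 # greedy-randomized construction
              k = 0
              while k < len(neighborhoods):     # VNS local-search phase
                  cand = neighborhoods[k](sol)  # search in neighborhood k
                  if evaluate(cand) < evaluate(sol):
                      sol, k = cand, 0          # improvement: restart at N_1
                  else:
                      k += 1                    # try the next neighborhood
              if best is None or evaluate(sol) < evaluate(best):
                  best = sol
          return best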

6
  • SIDEMAR FIDELES CEZARIO
  • Application of the OWA Operator with metaheuristic in the Beam Angle Optimization and Intensity Problems in IMRT

  • Advisor : ELIZABETH FERREIRA GOUVEA GOLDBARG
  • COMMITTEE MEMBERS :
  • ELIZABETH FERREIRA GOUVEA GOLDBARG
  • ISLAME FELIPE DA COSTA FERNANDES
  • MATHEUS DA SILVA MENEZES
  • SILVIA MARIA DINIZ MONTEIRO MAIA
  • THATIANA CUNHA NAVARRO DE SOUZA
  • Date: Apr 26, 2024

  • Abstract:
  • The use of radiotherapy for cancer treatment is essential for combating this disease. The challenge is to achieve the minimum dose prescribed for the tumor while avoiding exposing healthy organs to radiation levels higher than the permitted limits. One of the main therapeutic approaches in this field is intensity-modulated radiotherapy (IMRT). This study addressed the Beam Angle Optimization and Fluence Map Optimization problems using metaheuristics. Three algorithms are presented: a genetic algorithm, the OWA-OMF memetic algorithm, and a multi-model memetic algorithm. All include the Ordered Weighted Averaging (OWA) operator. The multi-model memetic algorithm uses different OWA functions to determine the best fluence map for a solution. The algorithms were compared using a new quality indicator composed of the two new indices proposed in this study. Statistical tests were conducted to compare the effectiveness of these algorithms, revealing the superiority of the multi-model memetic algorithm over the others. With these algorithms, it was possible to find clinically viable solutions for most instances.
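    For reference, the OWA operator shared by the three algorithms aggregates its arguments through weights applied to the sorted values; the standard definition reads:

      % OWA over x_1, ..., x_n with weights w_i \ge 0, \sum_i w_i = 1,
      % and x_{(1)} \ge x_{(2)} \ge ... \ge x_{(n)} the sorted arguments:
      \mathrm{OWA}_w(x_1, \dots, x_n) = \sum_{i=1}^{n} w_i \, x_{(i)}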

7
  • JÚLIA MADALENA MIRANDA CAMPOS
  • Home Health Care Routing and Scheduling Problem with Service Prioritization

  • Advisor : ELIZABETH FERREIRA GOUVEA GOLDBARG
  • COMMITTEE MEMBERS :
  • ELIZABETH FERREIRA GOUVEA GOLDBARG
  • SILVIA MARIA DINIZ MONTEIRO MAIA
  • GUSTAVO DE ARAUJO SABRY
  • MATHEUS DA SILVA MENEZES
  • THATIANA CUNHA NAVARRO DE SOUZA
  • Date: Apr 29, 2024

  • Abstract:
  • The home care service is a type of health care that comprises a set of prevention, rehabilitation, and treatment actions for illnesses provided at home. With the emergence of COVID-19, home care became even more present, replacing or complementing hospital admission and offering a more humanized type of care for people with a stable clinical condition who require medical attention. Scheduling and routing the health professionals who provide such services poses several challenges, including serving patients within each professional's working hours, having an appropriately sized team of professionals, ensuring patient and professional satisfaction, and saving costs on the fleet of vehicles that transport professionals. This work presents a new variant of the problem, dividing patients into priority and optional. Priority patients must be treated within the defined planning horizon, and it is desirable that, as far as possible, optional patients are also served. The objective is to maximize the revenue received from services minus the transport costs of professionals. This work presents an Integer Linear Programming model for the problem, implemented and tested on a set of instances also proposed in this work. In the variant discussed here, each professional is transported by a vehicle. This work also presents a comprehensive review of the literature on the Health Professional Routing and Scheduling Problem, in which several mathematical models found in the literature were implemented and compared to evaluate their efficiency and applicability in the context of the problem studied.
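    The objective described above (service revenue minus transport costs) takes the typical ILP form sketched below; the sets, variables, and symbols are illustrative stand-ins, not the thesis's exact model.

      % y_i = 1 if patient i is served; x_{kuv} = 1 if professional k
      % travels from u to v; r_i = revenue of serving i; c_{uv} = travel cost.
      \max \sum_{i \in P} r_i \, y_i - \sum_{k \in K} \sum_{(u,v) \in A} c_{uv} \, x_{kuv}
      \quad \text{s.t.} \quad y_i = 1 \ \ \forall i \in P_{\mathrm{priority}}
      % plus routing, scheduling, and working-hour constraints.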

8
  • RAMIRO DE VASCONCELOS DOS SANTOS JUNIOR
  • Using Machine Learning to Classify Criminal Macrocauses in Smart City Contexts

  • Advisor : NELIO ALESSANDRO AZEVEDO CACHO
  • COMMITTEE MEMBERS :
  • ARAKEN DE MEDEIROS SANTOS
  • BRUNO MOTTA DE CARVALHO
  • DANIEL SABINO AMORIM DE ARAUJO
  • NELIO ALESSANDRO AZEVEDO CACHO
  • THAIS GAUDENCIO DO REGO
  • Date: May 2, 2024

  • Abstract:
  • Our research presents a new approach to classifying macrocauses of crime, specifically focusing on predicting and classifying the characteristics of ILVCs. Using a dataset from Natal, Brazil, we experimented with five machine learning algorithms, namely Decision Trees, Logistic Regression, Random Forest, SVC, and XGBoost. Our methodology combines feature engineering, FAMD for dimensionality reduction, and SMOTE-NC for data balancing. We achieved an average accuracy of 0.962, with a standard deviation of 0.016, an F1-Score of 0.961, with a standard deviation of 0.016, and an AUC ROC curve of 0.995, with a standard deviation of 0.004, using XGBoost. We validated our model using the abovementioned metrics, corroborating their significance using the ANOVA statistical method. Our work aligns with smart city initiatives, aiming to increase public safety and the quality of urban life. The integration of predictive analysis technologies in a smart city context provides an agile solution for analyzing macrocauses of crime, potentially influencing the decision-making of crime analysts and the development of effective public security policies. Our study contributes significantly to the field of machine learning applied to crime analysis, demonstrating the potential of these techniques in promoting safer urban environments. We also used the Design Science methodology, which includes a consistent literature review, design iterations based on feedback from crime analysts, and a case study, effectively validating our model. Applying the classification model in a smart city context can optimize resource allocation and improve citizens' quality of life through a robust solution based on theory and data, offering valuable information for public safety professionals.
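    A minimal sketch of the pipeline stages named above, using common libraries (prince for FAMD, imbalanced-learn for SMOTE-NC, xgboost); the dataset, column roles, and exact stage ordering are illustrative assumptions, not the thesis's code.

      # Illustrative pipeline: SMOTE-NC balances a mixed categorical/numeric
      # dataset, FAMD reduces its dimensionality, and XGBoost classifies.
      # File name, target column, and ordering are placeholder assumptions.
      import pandas as pd
      from imblearn.over_sampling import SMOTENC
      from prince import FAMD
      from xgboost import XGBClassifier

      df = pd.read_csv("crimes.csv")  # hypothetical dataset
      X, y = df.drop(columns="macrocause"), df["macrocause"]

      cat_cols = [i for i, c in enumerate(X.columns) if X[c].dtype == "object"]
      X_bal, y_bal = SMOTENC(categorical_features=cat_cols,
                             random_state=0).fit_resample(X, y)

      # Recent imbalanced-learn versions return DataFrames, preserving the
      # mixed dtypes that FAMD needs to separate numeric/categorical parts.
      X_red = FAMD(n_components=10).fit_transform(X_bal)
      model = XGBClassifier().fit(X_red, pd.factorize(y_bal)[0])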

9
  • VERNER RAFAEL FERREIRA
  • FiberNet: A simple and efficient convolutional neural network architecture

  • Advisor : ANNE MAGALY DE PAULA CANUTO
  • COMMITTEE MEMBERS :
  • ANNE MAGALY DE PAULA CANUTO
  • ARAKEN DE MEDEIROS SANTOS
  • BRUNO MOTTA DE CARVALHO
  • DIEGO SILVEIRA COSTA NASCIMENTO
  • JOAO CARLOS XAVIER JUNIOR
  • Date: Aug 28, 2024

  • Abstract:
  • With the ongoing increase in data generation each year, a wide array of technologies has emerged with the aim of transforming this information into valuable insights. However, often the financial and computational costs associated with the software used in this process render them inaccessible to the majority of individuals. A notable example is the requirement for specific hardware, such as Graphics Processing Unit (GPU) and Tensor Processing Unit (TPU), which are highly advanced in technological terms but are also notably expensive.

    These challenges of accessibility are reflected in various areas of computing, including the application of Convolutional Neural Networks (CNNs), which play a crucial role in the field of computer vision. CNNs are highly effective in extracting meaningful information from images and identifying objects. However, the high cost associated with cutting-edge resources like GPUs and TPUs can limit the widespread adoption of these powerful networks. This makes it imperative to explore alternatives that allow for the construction and deployment of effective models with more accessible resources, without compromising the quality of results.

    In this context, we present our research, in which we have developed an algorithm that sets itself apart from other existing models in terms of size, number of trainable parameters, and inference speed. Despite its compactness, the algorithm maintains high accuracy and the ability to process large volumes of data.

    The proposed architecture, named FiberNet in reference to the sisal plant, is a small and straightforward CNN. The primary objective is to offer a financially viable low-cost model for classifying Agave Sisalana images and its fibers. FiberNet features a reduced number of trainable parameters, resulting in high inference speed. To achieve this, we employ a specialized layer that reduces the dimension of input data before the convolution layers.

    The main goal of this research is to reduce computational costs without compromising algorithm performance. To assess the viability of the proposed method, we conducted an empirical analysis where our model achieved an accuracy of 96.25% on the Sisal dataset and 74.9% on the CIFAR10 dataset, using only 754,345 trainable parameters. Furthermore, we applied the proposed method to a widely recognized image dataset, obtaining promising results. These outcomes reinforce the effectiveness and applicability of our model in practice.
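    A sketch of the architectural idea described above (an early dimension-reducing layer ahead of the convolutional stack), in Keras; the layer sizes are illustrative, not FiberNet's actual configuration.

      # Illustrative small CNN in the spirit described above: a strided front
      # layer shrinks the input before the main convolutions, cutting
      # parameters and speeding inference. All sizes are placeholders.
      import tensorflow as tf

      model = tf.keras.Sequential([
          tf.keras.Input(shape=(224, 224, 3)),
          # Dimension-reducing front layer: stride-4 conv, 224x224 -> 56x56.
          tf.keras.layers.Conv2D(16, 7, strides=4, padding="same",
                                 activation="relu"),
          tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
          tf.keras.layers.MaxPooling2D(),
          tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu"),
          tf.keras.layers.GlobalAveragePooling2D(),
          tf.keras.layers.Dense(10, activation="softmax"),  # e.g. CIFAR-10
      ])
      model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                    metrics=["accuracy"])
      model.summary()  # small trainable-parameter count, fast inference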

10
  • LUANA TALITA MATEUS DE SOUZA

  • Integrating the General Data Protection Law into Software Development: A New Requirements Specification Model applied to Digital Health
  • Advisor : MARCIA JACYNTHA NUNES RODRIGUES LUCENA
  • COMMITTEE MEMBERS :
  • MARCIA JACYNTHA NUNES RODRIGUES LUCENA
  • EDUARDO HENRIQUE DA SILVA ARANHA
  • APUENA VIEIRA GOMES
  • JOSUÉ VITOR DE MEDEIROS JÚNIOR
  • CARLA TACIANA LIMA LOURENCO SILVA SCHUENEMANN
  • MARILIA ARANHA FREIRE
  • Date: Aug 29, 2024

  • Abstract:
  • This research addresses the problem of insufficient or inefficient requirements specification to meet informational needs in software development in compliance with the regulations and laws applicable to Digital Health systems. In particular, it highlights the importance of ensuring that legal requirements, such as those stipulated by the General Data Protection Law (GDPL), are integrated and implemented in the development process. The overall objective of this work is to develop a requirements specification artifact that complies with the GDPL in the context of agile development of digital health solutions. This artifact aims to establish a Requirements Engineering strategy that ensures legal compliance, facilitating its practical application in software development. In addition, the artifact should have an educational character, being useful for students and teachers, allowing a practical integration of the concepts of privacy and data protection in both the development and teaching processes.  The methodology of this research was divided into three parts: (1) the phase of preparing the extension proposal; (2) the phase of preparing the evaluation of the extension proposal; and (3) the phase of applying the evaluation of the proposal. In evaluating the proposal, an evaluation tool was developed for two profiles of participants (information technology professionals and teachers). We obtained a total of 24 responses to the forms. The results show that the adoption of the proposal to extend the requirements can be favorable, as it has the potential to help identify and effectively communicate the requirements appropriate to the GDPL to agile development teams, facilitating compliance with legal requirements and data protection. In addition, an important educational potential was identified in the proposal, as the participating teachers assessed that it can be used as a suitable tool for teaching about requirements and the GDPL in the classroom.


2023
Dissertations
1
  • KEVIN BARROS COSTA
  • Self-Organized Monitoring Plan at Cloud-Network Slice Granularity

  • Advisor : AUGUSTO JOSE VENANCIO NETO
  • COMMITTEE MEMBERS :
  • FABIO LUCIANO VERDI
  • AUGUSTO JOSE VENANCIO NETO
  • BENJAMIN RENE CALLEJAS BEDREGAL
  • Date: Jan 27, 2023

  • Abstract:
  • The main goal of this master's research is to propose a new monitoring plan for the NECOS ecosystem, denoted DIMA (Distributed Infrastructure & Monitoring Abstraction). The DIMA proposal aims to afford monitoring as a service inside NECOS domains at Cloud-Network Slice part granularity, based on the motivation evidenced by the analysis of related works. Currently, NECOS relies on the IMA (Infrastructure & Monitoring Abstraction) solution, which offers monitoring as a service through a centralized approach running at the core cloud premises. DIMA intends to advance the IMA solution by providing monitoring as a service for the NECOS ecosystem with the following improvements: full automation of monitoring service orchestration, support for different monitoring models (centralized, distributed, and hybrid), and harnessing of the edge-to-cloud continuum.

2
  • RODRIGO LAFAYETTE DA SILVA
  • Utilizando Aprendizado de Máquina na Identificação de Null Pointer Exceptions em Análise Estática de Código em Java

  • Advisor : EVERTON RANIELLY DE SOUSA CAVALCANTE
  • COMMITTEE MEMBERS :
  • EVERTON RANIELLY DE SOUSA CAVALCANTE
  • MARJORY CRISTIANY DA COSTA ABREU
  • DANIEL SABINO AMORIM DE ARAUJO
  • RODRIGO BONIFACIO DE ALMEIDA
  • Date: Jan 30, 2023

  • Abstract:
  • For the sake of flexibility, mainstream object-oriented programming languages admit null values for references. In the Java programming language, using an object reference holding a null value throws a Null Pointer Exception (NPE), one of the most frequent causes of failures in applications written in this language. Static analysis has been used to inspect software artifacts such as source code or binary code to locate the origin of faults without needing to run the program in a debugging-oriented fashion. Despite its effectiveness, static analysis relies on a fixed, static set of rules describing fault-occurrence patterns and is known for a significant number of false positives. This study investigates how Machine Learning (ML) techniques can improve the precision of detecting NPE-related faults through static analysis, a line still unexplored in the literature and in the software industry. The main objective is to propose, implement, and evaluate a classification-based approach that addresses the problem of detecting NPE-related faults in Java code. The contributions of this work are: (i) an analysis of how ML techniques can be used to detect these faults through static analysis and (ii) an evaluation of the performance of ML techniques compared with traditional static analysis tools.

3
  • JOSÉ RENATO DE ARAÚJO SOUTO
  • Reconstructing Three-Dimensional Wounds Using Point Descriptors: A Comparative Study

  • Advisor : BRUNO MOTTA DE CARVALHO
  • COMMITTEE MEMBERS :
  • ADRIANA TAKAHASHI
  • BRUNO MOTTA DE CARVALHO
  • LEONARDO CESAR TEONACIO BEZERRA
  • Date: Jan 31, 2023

  • Abstract:
  • Ulcer is the generic name given to any lesion in skin or mucosal tissue. These lesions culminate in the rupture of the epithelium, resulting in the exposure of deeper tissues. The broader problem addressed by the project in which this work is inserted is the development of accurate and efficient computational tools for monitoring the treatment of chronic wounds, a follow-up of fundamental importance for determining how a patient's treatment is evolving. Thus, this work presents a quantitative evaluation of three-dimensional reconstructions obtained using Structure from Motion with the aid of six different point descriptors. The specific problem tackled is to determine which point descriptors are most efficient and accurate for the three-dimensional reconstruction of chronic wounds; the descriptors SIFT, SURF, ORB, BRIEF, FREAK, and DRINK were chosen. The results achieved indicate that measurements of chronic wound areas can be obtained with a smartphone through the proposed methodology. Regarding processing time, the floating-point descriptors SIFT and SURF were the ones with the highest computational cost. In calculating the area of wound surfaces, the descriptors obtained average errors of 2.61% for SIFT, 3.36% for SURF, 10.03% for BRIEF, 6.33% for ORB, 6.27% for FREAK, and 3.74% for DRINK, when using a configuration with 8 images.
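    As an illustration of the descriptor comparison described above, here is a minimal OpenCV sketch extracting and matching two of the cited descriptors across two views; image paths are placeholders, SURF and DRINK are not bundled with stock OpenCV, and the SfM reconstruction itself is not shown.

      # Sketch: detect/describe keypoints with SIFT and ORB and match them
      # across two views, the first step of an SfM pipeline. Paths are
      # placeholders; binary descriptors need Hamming-distance matching.
      import cv2

      img1 = cv2.imread("wound_view1.png", cv2.IMREAD_GRAYSCALE)
      img2 = cv2.imread("wound_view2.png", cv2.IMREAD_GRAYSCALE)

      for name, detector, norm in [
          ("SIFT", cv2.SIFT_create(), cv2.NORM_L2),     # floating-point
          ("ORB", cv2.ORB_create(), cv2.NORM_HAMMING),  # binary
      ]:
          kp1, des1 = detector.detectAndCompute(img1, None)
          kp2, des2 = detector.detectAndCompute(img2, None)
          matches = cv2.BFMatcher(norm, crossCheck=True).match(des1, des2)
          print(f"{name}: {len(matches)} cross-checked matches")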

4
  • NATÁSSIA RAFAELLE MEDEIROS SIQUEIRA
  • The use of machine learning to classify electricity consumption profiles in different regions of Brazil

  • Advisor : ANNE MAGALY DE PAULA CANUTO
  • COMMITTEE MEMBERS :
  • ANNE MAGALY DE PAULA CANUTO
  • BRUNO MOTTA DE CARVALHO
  • DIEGO SILVEIRA COSTA NASCIMENTO
  • Date: Feb 24, 2023

  • Abstract:
  • Accurate forecasting of energy consumption can significantly contribute to improving distribution management and potentially help control and reduce energy consumption rates. Data-driven computational techniques are becoming increasingly robust and popular as they achieve good accuracy in their results. This study proposes the development of a model capable of classifying energy consumption profiles in the residential sector using machine learning and transfer learning techniques. The application of Machine Learning (ML) techniques to energy production shows great potential for controlling and managing the production and distribution of electric energy, which can bring greater efficiency, improve production, and optimize distribution. In this study, we combine ML techniques with transfer learning, which can reuse previously established knowledge in new contexts (knowledge bases), making the energy forecasting process more efficient and robust.

5
  • PAULO ENEAS ROLIM BEZERRA
  • CSP Specification and Verification of Relay-based Railway Interlocking Systems

  • Advisor : MARCEL VINICIUS MEDEIROS OLIVEIRA
  • COMMITTEE MEMBERS :
  • AUGUSTO CEZAR ALVES SAMPAIO
  • MARCEL VINICIUS MEDEIROS OLIVEIRA
  • MARTIN ALEJANDRO MUSICANTE
  • Date: Mar 21, 2023

  • Abstract:
  • Railway Interlocking Systems (RIS) have long been implemented as relay-based systems. However, checking these systems for safety is usually done manually from an analysis of electrical circuit diagrams, which cannot be considered trustworthy. In the literature, formal verification approaches are used to analyse such systems. However, this type of verification tends to consume many computational resources, which hinders its use for industrial systems and makes the verification of more complex electrical circuits impractical. Although formal proof of the behaviour of these systems is effective in improving safety, existing works generally focus on modelling the system state transitions, ignoring the components' independent concurrent behaviours. As a consequence, it is not possible to verify the existence of concurrency problems. Differently from other approaches, the methodology proposed in this work allows the specification of transient states. As a result, it is possible to perform a stronger verification, including an investigation of the existence of state-succession cycles (i.e., the ringbell effect), which are dangerous in such systems. A formal analysis of the system has the potential to guarantee its safety. This work presents a proposal for a formal specification model of the states of the electrical components of relay-based RIS using a process-based language, CSP. This model enables the verification of such systems based on each component's behaviour, which allows the analysis of properties such as the existence of infinite state-succession cycles (i.e., the ringbell effect), short-circuits, deadlocks, or divergences, by simplifying the analysis and logical verification of the system based on the preconditions of the component states. Furthermore, the proposed model allows the automation of the formal verification of the system by model checking, focusing on the concurrency aspects of such systems and supporting the analysis of new safety conditions that were not considered in previous approaches.

6
  • TIAGO VINÍCIUS REMÍGIO DA COSTA
  • Uma Arquitetura de Software de Referência para Sistemas Modernos de Big Data

  • Advisor : EVERTON RANIELLY DE SOUSA CAVALCANTE
  • COMMITTEE MEMBERS :
  • ELISA YUMI NAKAGAWA
  • EVERTON RANIELLY DE SOUSA CAVALCANTE
  • LUCAS BUENO RUAS DE OLIVEIRA
  • THAIS VASCONCELOS BATISTA
  • Date: Jul 25, 2023

  • Abstract:
  • Big Data is an umbrella term that generally refers to datasets whose size grows beyond the capacity of traditional methods and tools to collect, store, process, and analyze data within a tolerable time and with reasonable computational resources. Big Data Systems (BDS) can be found in several areas, providing insights and useful information to organizations and users. The complexity and intrinsic characteristics of these systems require software architectures that adequately satisfy functional and quality requirements. Reference architectures (RAs) are considered an important asset in building software architectures since they promote knowledge reuse and guide their development, standardization, and evolution. However, many reference architectures for BDS are still produced in an ad-hoc fashion, without following a systematic process for their design and evaluation. This work proposes the Modern Data Reference Architecture (MoDaRA), an RA for BDS grounded in a systematic process that aggregates industry practice and academic knowledge in this domain. The design of MoDaRA followed ProSA-RA, a well-defined process to guide the definition of RAs, comprising architectural analysis, synthesis, and evaluation phases structured over selected information sources. MoDaRA was evaluated considering two industry use cases and a checklist for evaluating RAs adapted to BDS.

7
  • ADELINO AFONSO FERNANDES AVELINO
  • SDNoC 42: Shortest Paths-Based SDNoC Model

  • Advisor : MARCIO EDUARDO KREUTZ
  • COMMITTEE MEMBERS :
  • ALISSON VASCONCELOS DE BRITO
  • MARCIO EDUARDO KREUTZ
  • MONICA MAGALHAES PEREIRA
  • Date: Sep 29, 2023

  • Abstract:
  • In this work, we developed a new network-on-chip architecture using software-defined networks; this architecture proved to be robust and capable of improving routing in a network-on-chip. The implementation consists of a software-defined network-on-chip architectural model that explores the parallelism of control mechanisms, using Dijkstra's algorithm to find the best path when routing packets between switches. The approach yields a significant improvement in communication latency by reducing the waiting time of packets in the controllers' queue and exploiting the network's topological potential through the OpenFlow protocol. The results obtained are promising: using Dijkstra's algorithm and increasing the number of cores makes it possible to improve communication latency in 100% of cases compared to the XY algorithm.
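    For illustration, here is a compact, generic heap-based Dijkstra implementation of the shortest-path computation such a controller performs; this is not the SDNoC 42 code, and the graph format is an assumption.

      # Generic Dijkstra shortest path over a weighted graph, the kind of
      # computation an SDNoC controller runs to install switch-to-switch
      # routes. Graph format: {node: {neighbor: link_cost}}. Illustrative.
      import heapq

      def dijkstra(graph, source, target):
          """Return (cost, path) of the cheapest route from source to target."""
          queue, seen = [(0, source, [source])], set()
          while queue:
              cost, node, path = heapq.heappop(queue)
              if node == target:
                  return cost, path
              if node in seen:
                  continue
              seen.add(node)
              for neighbor, weight in graph.get(node, {}).items():
                  if neighbor not in seen:
                      heapq.heappush(queue,
                                     (cost + weight, neighbor, path + [neighbor]))
          return float("inf"), []

      # 2x2 mesh example: route from switch s00 to s11.
      mesh = {"s00": {"s01": 1, "s10": 1}, "s01": {"s11": 1},
              "s10": {"s11": 1}, "s11": {}}
      print(dijkstra(mesh, "s00", "s11"))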

8
  • CLODOMIR SILVA LIMA NETO
  • Algebraization in quasi-Nelson logics

  • Advisor : UMBERTO RIVIECCIO
  • COMMITTEE MEMBERS :
  • JOAO MARCOS DE ALMEIDA
  • REGIVAN HUGO NUNES SANTIAGO
  • UMBERTO RIVIECCIO
  • RODOLFO ERTOLA BIRABEN
  • Date: Oct 31, 2023

  • Abstract:
  • Quasi-Nelson logic is a recently introduced generalization of Nelson's constructive logic with strong negation to a non-involutive setting. The present work studies the logic of quasi-Nelson pocrims (L_QNP) and the logic of quasi-N4-lattices (L_QN4) by means of an axiomatization via a finite Hilbert-style calculus. The principal question we address is whether the algebraic counterpart of a given fragment of quasi-Nelson logic (or class of quasi-N4-lattices) can be axiomatized abstractly by means of identities or quasi-identities. Our main mathematical tool in this investigation is the twist-algebra representation. Coming to the question of algebraizability, we recall that quasi-Nelson logic (as an extension of FLew) is obviously algebraizable in the sense of Blok and Pigozzi. Furthermore, we show the algebraizability of L_QNP and L_QN4, which are BP-algebraizable with the set of defining identities E(α) := {α ≈ α → α} and the set of equivalence formulas Δ(α, β) := {α → β, β → α, ∼α → ∼β, ∼β → ∼α}. In this document, we record the results achieved so far and indicate a plan for the developments to be included in the final version of this thesis.

9
  • JAIRO RODRIGO SOARES CARNEIRO
  • A Virtual Teaching Programming Assistant to Support Mastery Learning

  • Advisor : EDUARDO HENRIQUE DA SILVA ARANHA
  • COMMITTEE MEMBERS :
  • EDUARDO HENRIQUE DA SILVA ARANHA
  • MARCIA JACYNTHA NUNES RODRIGUES LUCENA
  • THIAGO REIS DA SILVA
  • Date: Nov 6, 2023

  • Abstract:
  • [Context] The high rates of student retention and dropout in IT courses and related areas are still barriers to be overcome, especially when they are related to certain subjects and/or programme content in their curricula, such as computer programming. In this way, attention has been focussed on technological training courses, especially in the search for solutions that will enable educational institutions to deal with the challenges of this issue. [Problem] It turns out that teaching and learning programming in higher education classes is challenging. From a teaching perspective, issues related to the daily challenges of dealing with class time, rigid curricula, student demotivation, and large, heterogeneous classes, among other things, make it impossible to provide more individualised support for students or end up resulting in an overload of activities for the teacher. This overload can jeopardise teaching not only in terms of assisting students, but also in choosing and implementing pedagogical models that depart from the traditional teaching model, such as Mastery Learning. This educational theory corresponds to a pedagogical approach proposed by Benjamin Bloom, which posits that all students in a class can progressively reach the same level of understanding of the content (mastery) when provided with the necessary conditions. However, the cost of implementing this approach can be very high for teachers, especially when it is not subsidised by technology. [Proposal] In this sense, as a way of supporting introductory programming courses, this study describes a virtual programming assistant that integrates a set of functionalities that can favour the adoption of Mastery Learning in programming classes, as it supports students' learning through automated actions. [Objective] This assistant aims to help teachers provide continuous and customised feedback. Therefore, the main objective of this study is to investigate how a Virtual Programming Assistant, designed with functionalities that enable the use of Mastery Learning, can support the work of teachers with their respective students in introductory programming courses mediated by online educational platforms for teaching and learning programming. [Methodology] As a starting point for achieving this objective, a systematic mapping of the literature was carried out, which brought together 40 primary studies dealing with the use of Mastery Learning in the areas of interest. Two studies were then planned and carried out with around 300 new students on an Information Technology degree course and their five teachers. The first was an exploratory study carried out to better investigate the problem and build the virtual assistant proposal. The second was a case study aimed at validating the defined proposal. [Results] The results show that the virtual assistant, in addition to benefiting teachers in correcting the proposed programming exercises and giving feedback to students (over 9,000 feedback messages were given throughout the course), can favour the adoption of the Mastery Learning pedagogical model by teachers in introductory programming classes.

10
  • GLAUBER MENDES DA SILVA BARROS
  • Energy-Driven Raft: An energy-driven consensus algorithm

  • Advisor : GIBEON SOARES DE AQUINO JUNIOR
  • COMMITTEE MEMBERS :
  • FLAVIA COIMBRA DELICATO
  • GIBEON SOARES DE AQUINO JUNIOR
  • NELIO ALESSANDRO AZEVEDO CACHO
  • Data: Dec 18, 2023


  • Show Abstract
  • In recent years, consensus algorithms have become a fundamental part of fault-tolerant distributed systems. For many years, the widely disseminated consensus algorithm that served as the basis for various applications was Paxos. Several other consensus algorithms and variations of Paxos have emerged since, one of which is the Raft consensus algorithm. Raft was created to make the consensus mechanism of Paxos simpler and more intuitive to understand, both for education and for practical implementations, and it has thus become one of the most used algorithms in real systems. A trend that computational systems have followed in recent years is replacing batteries as the electrical power source with an approach that harvests energy from the environment to power the device (energy harvesting). This trend is mainly motivated by environmental concerns about the large number of batteries in such devices that will be discarded over the next few years. However, energy harvesting poses challenges that must be overcome, the main one being the dynamicity of the energy collection process compared with a system powered by a source that is stable for a certain period, namely the battery. To deal with this dynamicity, one adopted measure is energy-driven computing: systems architecturally designed around the instability of environmental energy harvesting, which adjust their operation according to the amount of energy being collected at a given moment, reducing energy consumption in periods of scarce harvest.

    This master's research therefore proposes changes to the Raft consensus algorithm to make it suitable for systems that harvest energy from the environment, based on the concepts of energy-driven computing. To this end, the nodes running Raft in a cluster must be given some level of knowledge of their energy situation; with this knowledge, each node adapts its operation, keeping its energy consumption in check without compromising the operation of the cluster as a whole.
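
    A minimal sketch of the kind of node-level adaptation described above, assuming each node can read its current charge level; the thresholds, timings, and actions are illustrative assumptions, not the algorithm actually proposed in this research.

        import random

        # Illustrative energy thresholds (assumptions, not from the thesis).
        LOW_ENERGY, SAFE_ENERGY = 0.2, 0.5

        class EnergyAwareRaftNode:
            """Raft node that adapts its behaviour to the harvested energy."""

            def __init__(self, energy_meter):
                self.energy_meter = energy_meter   # callable returning charge in [0, 1]
                self.state = "follower"
                self.heartbeat_interval = 0.1      # seconds, relevant when leader

            def tick(self):
                energy = self.energy_meter()
                if self.state == "leader" and energy < LOW_ENERGY:
                    self.state = "follower"        # step down; let a charged node lead
                elif self.state == "leader":
                    # Space heartbeats out when energy is scarce to save power.
                    self.heartbeat_interval = 0.1 if energy > SAFE_ENERGY else 0.5

            def election_timeout(self):
                # Bias elections toward well-charged nodes: low-energy nodes
                # wait longer before candidacy, so they rarely win leadership.
                return random.uniform(0.15, 0.3) / max(self.energy_meter(), 0.05)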

Thesis
1
  • SAMUEL DA SILVA OLIVEIRA
  • Parallelized Superiorization Method for History Matching Problems Using Seismic Priors and Piecewise Smoothness

  • Advisor : BRUNO MOTTA DE CARVALHO
  • COMMITTEE MEMBERS :
  • BRUNO MOTTA DE CARVALHO
  • EDGAR GARDUNO ANGELES
  • ISLAME FELIPE DA COSTA FERNANDES
  • MARCIO EDUARDO KREUTZ
  • SILVIA MARIA DINIZ MONTEIRO MAIA
  • Data: Feb 13, 2023


  • Show Abstract
  • History Matching is a very important process used in managing oil and gas production, since it aims to adjust a reservoir model until it closely reproduces the past behavior of an actual reservoir, so that the model can be used to predict future production. This work proposes to solve this problem with a constrained iterative method called superiorization to optimize a reservoir production model. The superiorization method is an approach that uses two optimization criteria, the first being the production result and the second being the piecewise smoothness of the reservoir, where the second criterion is optimized without negatively affecting the optimization of the first. A genetic algorithm, widely used in the literature for solving history matching, was chosen as a comparative approach to the tabu search algorithm equipped with the superiorization method. Both techniques are iterative and use population-based approaches. As the problem addressed is an inverse problem that is often severely underdetermined, many possible solutions may exist. Because of this, we also propose using seismic data from the reservoirs to identify the faults present in the reservoir, so that piecewise smoothness values can be used to reduce the number of possible results through a regularization tied to the second optimization criterion of the superiorized tabu search algorithm. Another critical factor in the history matching process is the simulation time, which is generally high. Thus, we also propose investigating parallelism in the solution using the CPU. The experiments are carried out on a 3D reservoir model to find correspondence for the gas, oil, and water yield values. The results obtained during the research show that the parallel approach decreases the execution time by more than 70%. As for the results' precision, the genetic approach obtained better values; however, the tabu search with the superiorization method produced very similar values with more stable results.
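
    A schematic sketch of a superiorized iteration under the two criteria described above: between steps of the basic algorithm for the primary criterion (the production mismatch), bounded and diminishing perturbations steer the model toward lower values of the secondary criterion (piecewise smoothness). The callables basic_step and smoothness_grad are hypothetical placeholders for the problem-specific components.

        import numpy as np

        def superiorize(x0, basic_step, smoothness_grad, n_iters=100, decay=0.99):
            """Superiorized iteration: perturb toward the secondary criterion
            with summable steps, then apply the basic method's step for the
            primary criterion, so convergence on the latter is not derailed."""
            x = np.asarray(x0, dtype=float)
            beta = 1.0
            for _ in range(n_iters):
                g = smoothness_grad(x)            # gradient of the secondary criterion
                norm = np.linalg.norm(g)
                if norm > 0:
                    x = x - beta * g / norm       # bounded perturbation
                beta *= decay                     # geometric decay keeps steps summable
                x = basic_step(x)                 # one step on the primary criterion
            return x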

2
  • LANDERSON BEZERRA SANTIAGO
  • Multidimensional Fuzzy Negations and Implications

  • Advisor : BENJAMIN RENE CALLEJAS BEDREGAL
  • COMMITTEE MEMBERS :
  • ANNAXSUEL ARAUJO DE LIMA
  • BENJAMIN RENE CALLEJAS BEDREGAL
  • EDUARDO SILVA PALMEIRA
  • REGIVAN HUGO NUNES SANTIAGO
  • RENATA HAX SANDER REISER
  • THADEU RIBEIRO BENÍCIO MILFONT
  • Data: Mar 2, 2023


  • Show Abstract
  • Multidimensional fuzzy sets are a new extension of fuzzy sets in which the membership values of an element of the universe of discourse are increasingly ordered vectors over the set of real numbers in the interval [0, 1]. The main application of this type of set is multi-criteria group decision-making problems, in which, in the n-dimensional case, we have a set of situations that are always evaluated by a fixed number n of experts. The multidimensional case is used when some of these experts refrain from evaluating some of these situations and, therefore, may be suitable for solving multi-criteria group decision-making problems with incomplete information. This thesis aims to investigate fuzzy negations and fuzzy implications on the set of increasingly ordered vectors on [0, 1], i.e. on L∞([0, 1]), with respect to some partial order. In this thesis we study partial orders, giving special attention to admissible orders on L∞([0, 1]). In addition, some properties and methods to construct and generate such operators from fuzzy negations and fuzzy implications, respectively, are provided (in particular, a notion of ordinal sums of n-dimensional fuzzy negations and ordinal sums of multidimensional fuzzy negations is proposed with respect to specific partial orders) and we demonstrate that an action of the group of automorphisms on fuzzy implications on L∞([0, 1]) preserves several original properties of the implication. Using a specific type of representable multidimensional fuzzy implication, we are able to generate a class of multidimensional fuzzy negations called natural m-negations. Finally, an application to decision-making problems is presented.
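
    For concreteness, a standard way to build an n-dimensional fuzzy negation from an ordinary fuzzy negation N (a sketch of the usual representable construction; the thesis's own definitions and admissible orders may differ) is, in LaTeX notation:

        % increasingly ordered vectors of dimension n
        L_n([0,1]) = \{ (x_1,\dots,x_n) \in [0,1]^n : x_1 \le \dots \le x_n \}
        % componentwise negation with reversal: since N is decreasing,
        % reversing the components keeps the output increasingly ordered
        \mathbf{N}(x_1,\dots,x_n) = \big( N(x_n), \dots, N(x_1) \big)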

3
  • RANMSÉS EMANUEL MARTINS BASTOS
  • Investigation of Models and Algorithms for the Traveling Salesman with Multiple Passengers and High Occupancy Problem

  • Advisor : ELIZABETH FERREIRA GOUVEA GOLDBARG
  • COMMITTEE MEMBERS :
  • ELIZABETH FERREIRA GOUVEA GOLDBARG
  • LUCÍDIO DOS ANJOS FORMIGA CABRAL
  • MARCO CESAR GOLDBARG
  • MATHEUS DA SILVA MENEZES
  • SILVIA MARIA DINIZ MONTEIRO MAIA
  • Data: Mar 27, 2023


  • Show Abstract
  • The Traveling Salesman with Multiple Passengers and High Occupancy Problem (TSMPHOP) is a generalization of the Traveling Salesman Problem that incorporates real-world features, transforming it into a ridesharing problem with routing constraints. In this variant, the salesman offers rides to third parties along the route to share the cost of the trip. Links between cities may contain high-occupancy tolls, in which the toll is waived if the vehicle is fully occupied. When tolls are charged, those expenses are paid entirely by the salesman. All other costs are shared equally between the salesman and all passengers occupying seats on the respective legs. The objective of the TSMPHOP is to find the Hamiltonian cycle with the lowest cost, calculated as the sum of expenses borne by the salesman. Such features promote efficiency in the use of urban space and the reduction of greenhouse gas emissions, given the incentive for sharing transportation with a larger number of people. This thesis presents the study of this new combinatorial optimization problem, beginning with an analysis of its relationship to other models in the literature. Subsequently, the mathematical formulation of the problem is addressed, with multiple variants for representing its constraints. Finally, algorithms are created to find good-quality solutions in a short amount of time. In order to conduct computational experiments, an artificial instance database is generated, and solution methods are implemented. Ten mathematical models are implemented in the Gurobi solver to establish a benchmark, determining optimal solutions for the instances and comparing different formulation techniques, including lazy constraints and piecewise-linear functions. Procedures for manipulating solutions and ten heuristic algorithms are also proposed. The algorithms are developed based on the Genetic Algorithm, Memetic Algorithm, and Transgenetic Algorithm metaheuristics and the Q-learning reinforcement learning technique. Three computational experiments are conducted: the first controlled by a maximum iteration parameter, the second with an absolute maximum count of objective function evaluations, and the third with a maximum count of objective function evaluations relative to the discovery of the last best solution. Parameter tuning is performed automatically by the irace tool. A statistical analysis based on the Friedman Aligned Ranks test indicated superior performance of the hybrid algorithm combining the Transgenetic Algorithm, the Memetic Algorithm, and the Q-learning technique.
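
    As a small worked example of the objective described above, a sketch under simplifying assumptions: the per-leg cost is split evenly among the current occupants, the high-occupancy toll is waived only at full capacity, and tolls are charged to the salesman alone. The data layout is hypothetical.

        def salesman_cost(route_legs, capacity):
            """Sketch of the TSMPHOP objective. route_legs: list of
            (edge_cost, toll, occupants) per leg, where occupants counts
            the salesman plus the passengers boarded on that leg."""
            total = 0.0
            for edge_cost, toll, occupants in route_legs:
                total += edge_cost / occupants   # travel cost split with passengers
                if occupants < capacity:         # high-occupancy toll waived at full car
                    total += toll                # tolls are borne by the salesman alone
            return total

        # Example: two legs, capacity 4; the second leg rides toll-free
        # because the vehicle is fully occupied.
        print(salesman_cost([(100.0, 10.0, 2), (60.0, 8.0, 4)], capacity=4))  # 75.0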

4
  • CAMILA DE ARAUJO
  • Enriching SysML-Based Software Architecture Descriptions: A Model-Driven Approach

  • Advisor : THAIS VASCONCELOS BATISTA
  • COMMITTEE MEMBERS :
  • FLAVIO OQUENDO
  • LUCAS BUENO RUAS DE OLIVEIRA
  • EVERTON RANIELLY DE SOUSA CAVALCANTE
  • MARCEL VINICIUS MEDEIROS OLIVEIRA
  • THAIS VASCONCELOS BATISTA
  • Data: Mar 31, 2023


  • Show Abstract
  • The critical nature of many complex software-intensive systems requires formal architecture descriptions to support automated architectural analysis of correctness properties. Due to the challenges of adopting formal approaches, many architects have preferred using notations such as UML, SysML, and their derivatives to describe the structure and behavior of software architectures. However, these semi-formal notations have limitations regarding the support for architectural analysis, particularly formal verification. This work investigates how to formally support SysML-based architecture descriptions to enable the formal verification of software architectures. The main contribution of this research is a model-driven development (MDD) approach that provides formal semantics to a SysML-based architectural language, SysADL, through a seamless transformation of SysADL architecture descriptions into the corresponding formal specifications in π-ADL, a theoretically well-founded language based on the higher-order typed π-calculus. The proposal's implementation involves a 4-phase process: (i) Model-to-Model (M2M) transformation of SysADL models into π-ADL models; (ii) Model-to-Text (M2T) transformation of π-ADL models into π-ADL code; (iii) generation of the corresponding executable architecture and architecture validation; and (iv) property verification. The work has other associated contributions supporting the 4-phase process: (i) a denotational semantics for SysADL in terms of π-ADL; (ii) the definition of a process to support the automated transformation of SysADL models into π-ADL models; (iii) the validation of the π-ADL architecture generated by the MDD transformation to show that it conforms to the original SysADL architecture; and (iv) the verification of formal architectural properties using execution traces. The proposal was implemented and validated using a Flood Monitoring System architecture.

5
  • BRUNO FRANCISCO XAVIER
  • Linear logic as a framework

  • Advisor : CARLOS ALBERTO OLARTE VEGA
  • COMMITTEE MEMBERS :
  • CARLOS ALBERTO OLARTE VEGA
  • ELAINE GOUVEA PIMENTEL
  • GISELLE MACHADO NOGUEIRA REIS
  • REGIVAN HUGO NUNES SANTIAGO
  • UMBERTO SOUZA DA COSTA
  • Data: May 19, 2023


  • Show Abstract
  • This thesis investigates the analyticity of proof systems using Linear Logic (LL). Analyticity refers to the property that a proof of a formula F only uses subformulas of F. In sequent calculus, this property is typically established by showing that the cut rule is admissible, meaning that the introduction of the auxiliary lemma A in the reasoning “if A follows from B and C follows from A, then C follows from B” can be eliminated. However, cut-elimination is a complex process that involves multiple proof transformations and requires the use of (semi-)automatic procedures to prevent mistakes. LL is a powerful tool for studying the analyticity of proof systems due to its resource-conscious nature, the focused system, and its cut-elimination theorem. Previous works by Miller and Pimentel have used LL as a logical framework for establishing sufficient conditions for cut-elimination of object logics (OL). However, many logical systems cannot be adequately encoded in LL, particularly sequent systems for modal logics. In this thesis, we utilize a linear nested sequent (LNS) presentation of a variant of LL with subexponentials (MMLL) and demonstrate that it is possible to establish a cut-elimination criterion for a broader class of logical systems. This includes LNS proof systems for classical and substructural multimodal logics, as well as the LNS system for intuitionistic logic. Additionally, we present an in-depth study of the cut-elimination procedure for LL. Specifically, we propose a set of cut rules for focused systems for LL, a variant of LL with subexponentials (SELL) and MMLL. Our research demonstrates that these cut rules are sufficient for directly establishing the admissibility of cut within the focused systems. We formalize our results in Coq, a formal proof assistant, providing procedures for verifying cut-admissibility of several logical systems that are commonly used in philosophy, mathematics, and computer science.
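
    The "auxiliary lemma" reading of the cut rule corresponds to its usual sequent-calculus rendering (the standard one-dimensional formulation, shown here only for orientation; the focused and linear variants studied in the thesis refine it), in LaTeX notation:

        \frac{\Gamma \vdash A \qquad \Delta, A \vdash C}{\Gamma, \Delta \vdash C}\;(\mathsf{cut})

    Cut-elimination shows that any proof using this rule can be transformed into one that does not, restoring analyticity: the remaining rules only ever decompose subformulas of the end sequent.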

6
  • RAFAEL JULLIAN OLIVEIRA DO NASCIMENTO
  • A framework for systematized correction of Requirements Smells through Techniques of Refactoring

  • Advisor : MARCIA JACYNTHA NUNES RODRIGUES LUCENA
  • COMMITTEE MEMBERS :
  • EDUARDO HENRIQUE DA SILVA ARANHA
  • FERNANDA MARIA RIBEIRO DE ALENCAR
  • FRANCISCO MILTON MENDES NETO
  • MARCIA JACYNTHA NUNES RODRIGUES LUCENA
  • RICARDO ARGENTON RAMOS
  • Data: May 29, 2023


  • Show Abstract
  • Natural Language Requirements Specification is a common type of documentation used in systems development to record information about requirements. The quality of this information is important for the success of activities, systems development, and requirements management itself. However, Natural Language can lead to ambiguity and other defects that end up compromising Quality Criteria for information about requirements, such as understandability, clarity, and completeness. A set of these structural defects was studied by Henning Femmer and named Requirements Smells. Requirements Smells are anomalies related to problems in writing requirements using Natural Language. Furthermore, these anomalies can occur in any Natural Language Specification template. However, although the literature contains a substantial number of studies on the behavior of Requirements Smells and their relationship with requirements quality, there are no studies on ways of systematized correction of Requirements Smells and on how these corrections can help restore or achieve Quality Criteria for requirements specified in Natural Language. The objective of this work is to develop a set of systematized corrections, called Refactoring Techniques, to correct Requirements Smells and help achieve the Quality Criteria. The Refactoring Techniques will be developed following the structure suggested by Martin Fowler, one of the pioneering authors on code refactoring. Experiments will be carried out with participants who will review requirements, identify Requirements Smells, and correct these requirements using the proposed Refactoring Techniques. For this, a set of 33 User Stories infected by Requirements Smells will be made available to the participants. At the end, they will answer questionnaires whose information will be analyzed. It is expected that the Refactoring Techniques developed will be effective in correcting Requirements Smells and achieving the Quality Criteria, becoming another alternative solution for Requirements Smells correction.

7
  • JOSÉ DIEGO SARAIVA DA SILVA
  • Understanding the Impact of Continuous Integration on Code Coverage

  • Advisor : UIRA KULESZA
  • COMMITTEE MEMBERS :
  • UIRA KULESZA
  • EDUARDO HENRIQUE DA SILVA ARANHA
  • ROBERTA DE SOUZA COELHO
  • DANIEL ALENCAR DA COSTA
  • GUSTAVO HENRIQUE LIMA PINTO
  • RODRIGO BONIFACIO DE ALMEIDA
  • Data: May 31, 2023


  • Show Abstract
  • Continuous Integration (CI) is a widely adopted software engineering practice that emphasizes frequent integration of the software through an automated build process. Although CI has been shown to detect errors earlier in the software life cycle, the relationship between CI and code coverage still needs to be clarified. Our work aims to fill this gap by investigating the quantitative and qualitative aspects of this relationship.

    In the quantitative study, we compared 30 projects with CI and 30 projects that never adopted CI (NOCI projects) to investigate whether CI is associated with higher code coverage rates. We analyzed 1,440 versions of different projects to identify trends in code coverage. Our findings reveal a positive association between CI and higher code coverage rates.

    Our qualitative study consisted of a survey and a document analysis. The survey revealed several significant findings, including a positive association between CI and higher code coverage rates, indicating the value of CI in promoting testing practices. Moreover, our survey emphasized the relevance of using code coverage during the authoring and review process, as it can help in the early detection of potential problems throughout the development cycle.

    The document analysis focused on coverage-related themes in the Pull Request discussions of projects that adopt CI. From this analysis, we identified the main topics associated with the use of coverage during Pull Requests, which can provide valuable insights into how developers use coverage to improve code quality. This information can guide the development of best practices for the use of coverage in projects that adopt CI, contributing to improving the quality and reliability of software products.

    Our work uncovered insights into the evolution of code coverage in projects that adopt CI, which can help researchers and practitioners adopt tools and practices to monitor, maintain, and even improve code coverage.

8
  • ALEXANDRE GOMES DE LIMA
  • Improving Legal Rhetorical Role Labeling Through Additional Data and Efficient Exploitation of Transformer Models

  • Advisor : EDUARDO HENRIQUE DA SILVA ARANHA
  • COMMITTEE MEMBERS :
  • EDUARDO HENRIQUE DA SILVA ARANHA
  • IVANOVITCH MEDEIROS DANTAS DA SILVA
  • JOSÉ GUILLERMO MORENO
  • LEONARDO CESAR TEONACIO BEZERRA
  • TAOUFIQ DKAKI
  • Data: Jun 29, 2023


  • Show Abstract
  • Legal AI, the application of Artificial Intelligence (AI) in the legal domain, is a research field that comprises several dimensions and tasks of interest. As in other application domains, one of the desired benefits is task automation, which increases the productivity of legal professionals and makes law more accessible to the general public. Text is an important data source in the legal domain; therefore, Legal AI has a great interest in advances in Natural Language Processing. This thesis concerns the automation of Legal Rhetorical Role Labeling (RRL), a task that assigns semantic functions to sentences in legal documents. Legal RRL is a relevant task because it finds information that is useful both by itself and for downstream tasks such as legal summarization and case law retrieval. Several factors make Legal RRL a non-trivial task, even for humans: the heterogeneity of document sources, the lack of standards, the domain expertise required, and the subjectivity inherent in the task. These complicating factors and the large volume of legal documents justify the automation of the task. Such automation can be implemented as a sentence classification task, i.e. sentences are fed to a machine learning model that assigns a label or class to each sentence. Developing such models on the basis of Pre-trained Transformer Language Models (PTLMs) is an obvious choice, since PTLMs are the current state of the art for many NLP tasks, including text classification. Nevertheless, in this thesis we highlight two main problems with works that exploit PTLMs to tackle the Legal RRL task. The first one is the lack of works that address how to better deal with the idiosyncrasies of legal texts and the typically small size and imbalance of Legal RRL datasets. Almost all related works simply employ the regular fine-tuning strategy to train models.

    The second problem is the poor utilization of the intrinsic ability of PTLMs to exploit context, which hampers the performance of the models. This thesis aims to advance the current state of the art on the Legal RRL task by presenting three approaches devised to overcome such problems. The first approach relies on a data augmentation technique to generate synthetic sentence embeddings, thus increasing the amount of training data. The second approach makes use of positional data by combining sentence embeddings and positional embeddings to enrich the training data. The third approach, called Dynamically-filled Contextualized Sentence Chunks (DFCSC), specifies a way to produce efficient sentence embeddings by better exploiting the encoding capabilities of PTLMs. The studies in this thesis show that the first two approaches have a limited impact on the performance of the models. Conversely, models based on the DFCSC approach achieve remarkable results and are the best performers in the respective studies. Our conclusion is that the DFCSC approach is a valuable contribution to the state of the art of the Legal RRL task.
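
    The second approach above can be pictured with a small sketch: each sentence embedding is concatenated with an encoding of the sentence's position in the document before classification. The sinusoidal scheme shown here is one plausible choice and an assumption, as are the dimensions; the thesis's exact positional encoding may differ.

        import numpy as np

        def positional_encoding(pos, dim):
            """Sinusoidal positional encoding (Transformer-style); dim is
            assumed even."""
            i = np.arange(dim // 2)
            angles = pos / np.power(10000.0, 2 * i / dim)
            enc = np.zeros(dim)
            enc[0::2] = np.sin(angles)
            enc[1::2] = np.cos(angles)
            return enc

        def enrich(sentence_embeddings):
            """Concatenate each sentence embedding with an encoding of its
            position in the document before feeding a sentence classifier."""
            dim = sentence_embeddings.shape[1]
            return np.stack([
                np.concatenate([emb, positional_encoding(pos, dim)])
                for pos, emb in enumerate(sentence_embeddings)
            ])

        # Example: 5 sentences with 768-dim embeddings -> 1536-dim vectors.
        print(enrich(np.random.rand(5, 768)).shape)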

9
  • CEPHAS ALVES DA SILVEIRA BARRETO

  • Selection and Labelling of Instances for Wrapper-based Semi-supervised Methods

  • Advisor : ANNE MAGALY DE PAULA CANUTO
  • COMMITTEE MEMBERS :
  • ANNE MAGALY DE PAULA CANUTO
  • DIEGO SILVEIRA COSTA NASCIMENTO
  • GEORGE DARMITON DA CUNHA CAVALCANTI
  • JOAO CARLOS XAVIER JUNIOR
  • KARLIANE MEDEIROS OVIDIO VALE
  • LEONARDO CESAR TEONACIO BEZERRA
  • Data: Jul 24, 2023


  • Show Abstract
  • In recent years, the use of Machine Learning (ML) techniques to solve real problems has become very common, a technological pattern adopted in plenty of domains. However, several of these domains do not have enough labelled data for ML methods to perform well. This problem led to the development of semi-supervised methods, a type of method capable of using both labelled and unlabelled instances in its model building. Among the semi-supervised learning methods, the wrapper methods stand out. This category of methods uses a process, often iterative, that involves training the method with labelled data, selecting the best instances from the unlabelled set, and labelling the selected instances. Despite being a simple and efficient process, errors in the selection or labelling steps are common and deteriorate the final performance of the method. This research aims to reduce selection and labelling errors in wrapper methods by establishing selection and labelling approaches that are more robust and less susceptible to errors. To this end, this work proposes a selection and labelling approach based on classification agreement and a selection approach based on a distance metric as an additional factor on top of an already used selection criterion (e.g. confidence or agreement). The proposed approaches can be applied to any wrapper method and were tested on 42 datasets with the Self-training and Co-training methods. The results obtained so far indicate that the proposals bring gains for both methods in terms of accuracy and F-measure.
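
    The iterative wrapper process described above can be summarized in a short sketch: Self-training with a plain confidence-based selection criterion. The classifier choice and selection sizes are illustrative assumptions; the thesis's proposals add agreement- and distance-based criteria on top of a loop of this shape.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def self_training(X_lab, y_lab, X_unlab, n_rounds=10, per_round=20):
            """Train on labelled data, pseudo-label the most confident
            unlabelled instances, add them to the labelled set, repeat."""
            X_lab, y_lab = X_lab.copy(), y_lab.copy()
            for _ in range(n_rounds):
                if len(X_unlab) == 0:
                    break
                clf = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
                proba = clf.predict_proba(X_unlab)
                chosen = np.argsort(proba.max(axis=1))[-per_round:]  # most confident
                pseudo = clf.classes_[proba[chosen].argmax(axis=1)]  # pseudo-labels
                X_lab = np.vstack([X_lab, X_unlab[chosen]])
                y_lab = np.concatenate([y_lab, pseudo])
                X_unlab = np.delete(X_unlab, chosen, axis=0)         # consume them
            return LogisticRegression(max_iter=1000).fit(X_lab, y_lab)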

10
  • FERNANDO NERES DE OLIVEIRA
  • On new contrapositivisation techniques for (interval-valued) fuzzy (co)implications and their generalizations

  • Advisor : REGIVAN HUGO NUNES SANTIAGO
  • COMMITTEE MEMBERS :
  • REGIVAN HUGO NUNES SANTIAGO
  • BENJAMIN RENE CALLEJAS BEDREGAL
  • ANDERSON PAIVA CRUZ
  • RENATA HAX SANDER REISER
  • RUI EDUARDO BRASILEIRO PAIVA
  • Data: Jul 28, 2023


  • Show Abstract
  • In this work, we introduce several contrapositivisation operators for fuzzy implications and present a broad study of each of these operators with respect to the main properties routinely required of fuzzy implications; we prove that the classes of these contrapositivisators are invariant under automorphisms and present some conditions for the N-compatibility of the respective contrapositivisations; we propose some methods for constructing classes of triangular norms (quasi-overlaps), triangular conorms (quasi-groupings) and aggregation functions from these contrapositivisators; we introduce the Min-Max contrapositivisation technique for fuzzy implications and some of its generalizations; we introduce four contrapositivisation operators for fuzzy coimplications, the so-called co-upper, co-lower, co-medium and co-(S,N)-contrapositivisators; we characterize these operators from the point of view of the properties usually attributed to fuzzy coimplications; we present sufficient conditions for the N-compatibility of the co-upper, co-lower, co-medium and co-(S,N)-contrapositivisations; we show that the classes of co-upper, co-lower, co-medium and co-(S,N)-contrapositivisators are closed under the action of automorphisms, and we propose a method for constructing triangular conorms from co-(S,N)-contrapositivisators of (T,N)-coimplications and fuzzy negations; finally, we propose the classes of interval-valued upper, lower and medium contrapositivisators and broadly characterize each of them, we show that these classes are invariant under interval-valued automorphisms, we introduce the notion of N-compatibility for the respective interval-valued contrapositivisations, and we prove that the best interval representations of real-valued upper, lower and medium contrapositivisators are, respectively, interval-valued upper, lower and medium contrapositivisators.
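
    For orientation, the classical contrapositivisation constructions from the fuzzy implications literature, on which the upper and lower operators build, are as follows (a sketch of the standard definitions, not necessarily the exact ones adopted in the thesis), in LaTeX notation:

        % contraposition of a fuzzy implication I w.r.t. a fuzzy negation N
        I_N(x,y)   = I(N(y), N(x))
        % upper and lower contrapositivisations
        I_N^u(x,y) = \max\big( I(x,y),\, I(N(y), N(x)) \big)
        I_N^l(x,y) = \min\big( I(x,y),\, I(N(y), N(x)) \big)

    When N is a strong (involutive) negation, both I_N^u and I_N^l satisfy the contrapositive symmetry I(x,y) = I(N(y), N(x)).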

11
  • FELIPE SAMPAIO DANTAS DA SILVA
  • Cloud-Network Slicing-driven Intelligent Mobility Control in Federated 5G Infrastructures

  • Advisor : AUGUSTO JOSE VENANCIO NETO
  • COMMITTEE MEMBERS :
  • AUGUSTO JOSE VENANCIO NETO
  • DANIEL CORUJO
  • EDUARDO COELHO CERQUEIRA
  • LISANDRO ZAMBENEDETTI GRANVILLE
  • ROGER KREUTZ IMMICH
  • VICENTE ANGELO DE SOUSA JUNIOR
  • Data: Jul 31, 2023


  • Show Abstract
  • The 5th generation of mobile networks (5G) is designed to provide high connectivity capacity in terms of coverage and support for a larger diversity of service types, traffic, and users (User Equipment, UE) with diverse mobility patterns and critical Quality of Service (QoS) requirements. By incorporating new technologies such as Cloud Computing and Mobile Edge Computing (MEC), 5G infrastructures bring network and cloud capabilities into the Radio Access Network (RAN), closer to end-users, allowing high flexibility in content and service delivery. In this context, new paradigms such as network slicing (NS) have been widely adopted for their ability to let the infrastructure deploy services in a personalized and elastic way, promoted through a set of network resource components that can be extended through physical resource virtualization and softwarization strategies. Recently, the Cloud-Network Slicing (CNS) approach was introduced as an alternative to meet the demands of industry verticals, which offer their services across multiple administrative and technological domains distributed over federated cloud and network infrastructures. In this scenario, characterized by the inevitability of handover between the various cells in the RAN, the infrastructure management system must be extended with improved capabilities to maintain the UE experience during mobility events, benefiting from the ability of slicing to orchestrate the resources available in a cloud ecosystem to deliver a service with seamless connectivity in a transparent and agile manner. It is therefore necessary to rethink traditional mobility management approaches and direct their operating model to CNS-defined infrastructures in order to advance mobile services on 5G networks. A recent survey of the literature revealed works that promote mobility management in NS-defined systems, but no mechanisms driven by CNS. Furthermore, the existing mechanisms manage the mobility of entities associated with NSs using classical models based on signal strength. In CNS-defined systems, however, decision mechanisms require complete knowledge of the active CNS instances, their computational and network requirements, operational services, and service-consuming nodes, among other aspects. The research developed in this Ph.D. thesis intends to fill this gap and pave the way for CNS-defined 5G systems through an approach with automated and proactive mobility control and management capabilities. The main contributions of this research include: (i) a broad review and discussion of quality-oriented handover decision mechanisms in compliance with the critical requirements imposed by 5G verticals in CNS-defined systems; (ii) an automated and proactive approach to CNS-driven mobility management, capable of keeping mobile users of CNS instances always best connected and served, respecting end-to-end definitions and a high level of isolation; (iii) compliance-driven mobility control of CNS resources and UE QoS requirements to act as a trigger for network re-orchestration events (e.g., mobility load balancing); and (iv) intelligent mobility prediction and decision to provide UEs (not necessarily in transit) with seamless and transparent connectivity while selecting the best access point for CNS services.

12
  • MURILO OLIVEIRA MACHADO
  • Investigating the Inclusion of Learning Methods and Mathematical Programming in an Architecture for Metaheuristic Hybridization for Multi-level Decisions on Optimization Problems

  • Advisor : ELIZABETH FERREIRA GOUVEA GOLDBARG
  • COMMITTEE MEMBERS :
  • CAROLINA DE PAULA ALMEIDA
  • ELIZABETH FERREIRA GOUVEA GOLDBARG
  • GUSTAVO DE ARAUJO SABRY
  • ISLAME FELIPE DA COSTA FERNANDES
  • MATHEUS DA SILVA MENEZES
  • SILVIA MARIA DINIZ MONTEIRO MAIA
  • Data: Aug 2, 2023


  • Show Abstract
  • The hybridization of metaheuristics is a topic that several researchers have studied due to its potential to produce more efficient heuristics than those based on a single technique. However, hybridization is not easy, as there are several ways to operationalize it. The task becomes even more challenging when three or more metaheuristic methods need to be hybridized or when one wants to add Mathematical Programming methods, thus creating matheuristics. Various methods have been proposed to hybridize metaheuristics, including some techniques that automate hybridization, such as multi-agent architectures. A few of these architectures use learning techniques, and an even smaller number deal with matheuristics. This work extends the capabilities of the Multi-agent Architecture for Metaheuristic Hybridization by including learning techniques and Mathematical Programming. The application of learning techniques is innovative, considering the agents' choice of heuristics to apply at different search stages. This work proposes a new form of hierarchical hybridization for Combinatorial Optimization problems with multiple decision levels. The algorithmic proposals are tested on the Traveling Car Renter with Passengers and the Cable Routing Problem in Wind Farms. These problems belong to the NP-hard class and require decision-making at multiple levels. In the case of the Traveling Car Renter with Passengers, there are three decision levels: the route, the car types, and the customers' transport demand. Cable routing in wind farms requires decisions concerning the cable locations and the cable type used in each section. The experiments for the Traveling Car Renter with Passengers were conducted on three classes of instances, totaling ninety-nine test cases ranging from four to eighty cities, two to five vehicles, and ten to one hundred and forty people requiring transportation. Experiments for the Cable Routing Problem in Wind Farms involved a set of two hundred instances. These instances are simulations of real situations developed in collaboration with domain experts. The approaches proposed in this work are compared to state-of-the-art algorithms for both problems.

13
  • JHOSEPH KELVIN LOPES DE JESUS
  • Unsupervised Automations for a Pareto-Front-based Dynamic Feature Selection

  • Advisor : ANNE MAGALY DE PAULA CANUTO
  • COMMITTEE MEMBERS :
  • ALUISIO IGOR REGO FONTES
  • ANNE MAGALY DE PAULA CANUTO
  • ARAKEN DE MEDEIROS SANTOS
  • BRUNO MOTTA DE CARVALHO
  • DANIEL SABINO AMORIM DE ARAUJO
  • Data: Aug 25, 2023


  • Show Abstract
  • Several feature selection strategies have been developed in the past decades, using different criteria to select the most relevant features. The use of dynamic feature selection, however, has shown that using multiple criteria simultaneously to determine the best subset of features for similar instances can provide encouraging results. While the use of dynamic selection has alleviated some of the limitations found in traditional selection methods, the exclusive use of supervised evaluation criteria and the manual definition of the number of groups limit the analysis of complex problems in unsupervised scenarios. In this context, this thesis proposes three strands of the Pareto-front-based dynamic feature selection approach. The first is the inclusion of unsupervised criteria in the base version of PF-DFS/M. The second (PF-DFS/P) and third (PF-DFS/A) strands are variations of the base version that include, respectively, partial and full automation of the definition of the number of groups used in the preprocessing step, through an ensemble of internal validation indices. Automating the hyperparameter for the number of groups replaces an arbitrary choice with mechanisms that can help researchers deal with unlabeled databases, or even support a deeper analysis of labeled databases. Additionally, an analysis of PF-DFS under noisy data scenarios is proposed. The investigative analyses used real and artificial datasets and evaluated: (I) the performance of PF-DFS in terms of stability and robustness, (II) the behavior of PF-DFS with the inclusion of unsupervised evaluation criteria, and (III) the behavior of PF-DFS with partial and full automation of the number of groups.
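
    A sketch of the kind of automation the partial and full variants rely on, assuming an ensemble of three internal validation indices from scikit-learn with rank aggregation; the thesis's actual index ensemble and aggregation scheme may differ.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.metrics import (silhouette_score,
                                     calinski_harabasz_score,
                                     davies_bouldin_score)

        def pick_n_groups(X, k_range=range(2, 11)):
            """Rank each candidate number of groups under each internal
            validation index and keep the k with the best aggregated rank."""
            ks = list(k_range)
            scores = []
            for k in ks:
                labels = KMeans(n_clusters=k, n_init=10,
                                random_state=0).fit_predict(X)
                scores.append([silhouette_score(X, labels),         # higher is better
                               calinski_harabasz_score(X, labels),  # higher is better
                               -davies_bouldin_score(X, labels)])   # lower is better
            ranks = np.argsort(np.argsort(np.array(scores), axis=0), axis=0)
            return ks[int(ranks.sum(axis=1).argmax())]               # best mean rank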

14
  • THIAGO PEREIRA DA SILVA
  • An Ensemble-Based Online Learning Approach for VNF Scaling at the Edge

  • Advisor : THAIS VASCONCELOS BATISTA
  • COMMITTEE MEMBERS :
  • FLAVIA COIMBRA DELICATO
  • FREDERICO ARAUJO DA SILVA LOPES
  • NELIO ALESSANDRO AZEVEDO CACHO
  • PAULO DE FIGUEIREDO PIRES
  • THAIS VASCONCELOS BATISTA
  • Data: Sep 15, 2023


  • Show Abstract
  • Edge computing platforms have recently been proposed to manage emerging applications with high computational load and low response-time requirements. To provide more agility and flexibility in service delivery while reducing deployment costs for infrastructure providers, technologies such as Network Functions Virtualization (NFV) are often used in production environments at the network edge. NFV decouples network functions from hardware by using virtualization technologies, allowing them to run as software in virtual machines or containers. Network functions, or even higher-layer functions, are implemented as software entities called Virtual Network Functions (VNFs). The integration of the Edge Computing and NFV paradigms, as proposed by the ETSI MEC, enables the creation of an ecosystem for 5G applications. Such integration allows the creation of VNF chains, representing end-to-end services for end users, and their deployment on edge nodes. A Service Function Chain (SFC) comprises a set of VNFs chained in a given order, where each VNF may run on a different edge node. The main challenges in this environment concern the dynamic provisioning and deprovisioning of distributed resources at the edge to run the VNFs and meet application requirements while optimizing the cost for the infrastructure provider. This work presents a hybrid autoscaling approach for dynamically scaling VNFs in the edge computing environment. The autoscaling approach employs an online ensemble machine learning technique that combines different online machine learning models to forecast the future workload of the VNFs. The architecture of the proposed approach follows the MAPE-K (Monitor-Analyze-Plan-Execute over a shared Knowledge) abstraction to dynamically adjust the number of resources in response to workload changes. The approach is innovative because it proactively predicts the workload to anticipate scaling actions and behaves reactively when the prediction model does not meet a desired quality. Moreover, our solution requires no prior knowledge of the data's behavior, which makes it suitable for use in different contexts. We also developed an algorithm to scale VNF instances, using a strategy to define how many resources should be allocated or deallocated during a scaling action. Finally, we evaluated the ensemble learning method and the proposed algorithm by comparing prediction performance and the number of scaling actions and Service Level Agreement (SLA) violations.
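
    The core hybrid idea can be sketched compactly: an ensemble of online predictors forecasts the next workload, and the planner acts proactively on the forecast unless the predictor's recent error exceeds a quality budget, in which case it reacts to the last observed load. The per-instance capacity, error budget, and predictor interface below are illustrative assumptions, not the thesis's actual algorithm.

        class EnsembleForecaster:
            """Weighted ensemble of online workload predictors; each
            predictor is any object exposing predict(history) -> float."""

            def __init__(self, predictors):
                self.predictors = predictors
                self.errors = [1e-9] * len(predictors)  # accumulated abs. error
                self.last_error = 0.0                   # ensemble's latest error

            def forecast(self, history):
                # Weight each model by the inverse of its accumulated error.
                weights = [1.0 / e for e in self.errors]
                preds = [p.predict(history) for p in self.predictors]
                return sum(w * p for w, p in zip(weights, preds)) / sum(weights)

            def update(self, history, actual):
                for i, p in enumerate(self.predictors):
                    self.errors[i] += abs(p.predict(history) - actual)
                self.last_error = abs(self.forecast(history) - actual)

        def plan_scaling(forecaster, history, instances,
                         capacity_per_vnf=100, error_budget=0.2):
            """MAPE-K 'Plan' step: proactive on the forecast when trusted,
            reactive on the last observed load otherwise."""
            reactive = forecaster.last_error > error_budget * max(history[-1], 1.0)
            load = history[-1] if reactive else forecaster.forecast(history)
            needed = max(1, -(-int(load) // capacity_per_vnf))  # ceil division
            return needed - instances   # >0: scale out; <0: scale in; 0: hold
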
15
  • ELIEZIO SOARES DE SOUSA NETO
  • The Effect of Continuous Integration on Software Development: A Causal Investigation

  • Advisor : UIRA KULESZA
  • COMMITTEE MEMBERS :
  • DANIEL ALENCAR DA COSTA
  • EDUARDO HENRIQUE DA SILVA ARANHA
  • MARCELO DE ALMEIDA MAIA
  • RODRIGO BONIFACIO DE ALMEIDA
  • SERGIO QUEIROZ DE MEDEIROS
  • UIRA KULESZA
  • Data: Sep 19, 2023


  • Show Abstract
  • Continuous Integration (CI) is a software engineering technique commonly mentioned as one of the pillars of agile methodologies. Its main goal is to reduce the cost and risk of integrating code across development teams. To this end, it advocates frequent commits to integrate developers' work in a code repository and frequent quality verification through automated builds and tests. By using CI, development teams are expected to detect and fix errors quickly, improving team productivity and the quality of the software products developed, among other benefits reported by researchers and practitioners. Previous studies on the use of CI point to benefits in several aspects of software development; however, these associations have not been mapped as a whole, nor are they sufficient to conclude that CI is in fact the cause of such outcomes.

    Therefore, this work empirically investigates the effects of CI adoption on software development from a causal perspective. First, we conducted a systematic literature review to catalog the findings of studies that empirically evaluated the effects of CI adoption. After exploring the documented knowledge, we conducted two studies to deepen the understanding of two of the aspects supposedly affected by CI adoption: software quality and the productivity of development teams. We intend to answer whether there is a causal relationship between CI adoption and the effects reported in the literature. To this end, we used causal Directed Acyclic Graphs (causal DAGs) combined with two other strategies: a literature review and a Mining Software Repositories (MSR) study. Our results provide an overview of the effects of CI reported in the literature and indicate that there is indeed a causal relationship between CI and software quality.

16
  • ANDRE LUIZ DA SILVA SOLINO
  • An Autonomic Strategy for Autoscaling in Smart City Platform Infrastructures

  • Advisor : THAIS VASCONCELOS BATISTA
  • COMMITTEE MEMBERS :
  • ANDRÉ GUSTAVO DUARTE DE ALMEIDA
  • CARLOS ANDRE GUIMARÃES FERRAZ
  • EVERTON RANIELLY DE SOUSA CAVALCANTE
  • NELIO ALESSANDRO AZEVEDO CACHO
  • THAIS VASCONCELOS BATISTA
  • Data: Nov 23, 2023


  • Show Abstract
  • Smart city application development platforms receive, store, process, and display large volumes of data from different sources and serve several users, such as citizens, visitors, government, and companies. The underlying computing infrastructure that supports these platforms must deal with the highly dynamic workload of the different applications, with simultaneous access from multiple users, sometimes working with many interconnected devices. Such an infrastructure typically encompasses cloud platforms for data storage and computation, capable of scaling up or down according to the demands of applications. This thesis proposes an autonomic approach for autoscaling smart city platform infrastructures. The approach follows the MAPE-K control loop to dynamically adjust the infrastructure in response to workload changes and supports scenarios where the number of processing requests is unknown a priori. The performance of the approach has been evaluated on the computational environment that supports Smart Geo Layers (SGeoL), a platform for developing real-world smart city applications.

17
  • MARCELO RÔMULO FERNANDES
  • Understanding Challenges and Recommendations in DevOps Education

  • Advisor : UIRA KULESZA
  • COMMITTEE MEMBERS :
  • ANDRE MAURICIO CUNHA CAMPOS
  • EDUARDO HENRIQUE DA SILVA ARANHA
  • ITAMIR DE MORAIS BARROCA FILHO
  • RODRIGO BONIFACIO DE ALMEIDA
  • UIRA KULESZA
  • VINICIUS CARDOSO GARCIA
  • Data: Nov 30, 2023


  • Show Abstract
  • DevOps represents a set of practices integrating software development and operations that is widely adopted in industry today. It involves implementing several concepts, such as a culture of collaboration, continuous delivery, and infrastructure as code. The high demand for DevOps professionals requires non-trivial adjustments to traditional software engineering courses and educational methodologies. As a new area, DevOps has brought significant challenges to academia regarding research topics and teaching strategies. From an educational point of view, it is essential to understand how existing courses teach fundamental DevOps concepts and practices. In this thesis, we conducted empirical studies to investigate the existing challenges of DevOps courses and recommendations to overcome them. Understanding such challenges and recommendations can help improve the learning of DevOps concepts and practices. In our first study, we present a systematic literature review aimed at identifying challenges and recommendations for teaching DevOps. Our findings show a total of 73 challenges and 85 recommendations organized into seven categories (pedagogy, curriculum, assessment, tooling, DevOps concepts, class preparation, and environment setup) from a total of 18 selected papers. We also discuss how the existing recommendations address the challenges found in the study, thus contributing to the preparation and execution of DevOps courses. Finally, we investigate whether the challenges and recommendations are specific to DevOps teaching. Our second study involved interviews with 14 DevOps educators from different universities and countries, aiming to identify the main challenges and recommendations for teaching DevOps. The study identified 83 challenges, 185 recommendations, and several association links and conflicts among them. These results can help educators plan, execute, and evaluate DevOps courses. They also highlight several opportunities for researchers to propose new methods and tools for teaching DevOps. The remaining studies of this thesis aim to evaluate the usefulness of the reported challenges and recommendations for DevOps education in preparing new courses and improving existing ones. We also plan to analyze the impact of the challenges and recommendations from an industry point of view.


2022
Dissertations
1
  • TUANY MARIAH LIMA DO NASCIMENTO
  • Using semi-supervised learning models for creating a new fake news dataset from Twitter posts: a case study on Covid-19 in the UK and Brazil

  • Advisor : MARJORY CRISTIANY DA COSTA ABREU
  • COMMITTEE MEMBERS :
  • EVERTON RANIELLY DE SOUSA CAVALCANTE
  • LAURA EMMANUELLA ALVES DOS SANTOS SANTANA DE OLIVEIRA
  • MARJORY CRISTIANY DA COSTA ABREU
  • PLACIDO ANTONIO DE SOUZA NETO
  • Data: Jan 14, 2022


  • Show Abstract
  • Fake news has been a big problem for society for a long time. It has been magnified, reaching worldwide proportions, mainly with the growth of social networks and instant chat platforms where any user can quickly interact with news, either by sharing, through likes and retweets, or by presenting his/her opinion on the topic. Since this is a very fast phenomenon, it became humanly impossible to manually identify and flag every piece of fake news. Therefore, the search for automatic solutions for fake news identification, mainly using machine learning models, has grown a lot in recent times, due to the variety of topics as well as the variety of fake news propagated. Most solutions focus on supervised learning models; however, in some datasets, there is an absence of labels for most of the instances. For this, the literature presents the use of semi-supervised learning algorithms, which are able to learn from a few labeled data. Thus, this work investigates the use of semi-supervised learning models for the detection of fake news, using as a case study the outbreak of the Sars-CoV-2 virus, the COVID-19 pandemic.

2
  • ANDERSON EGBERTO CAVALCANTE SALLES
  • Internal Fault Detection in SCIG using Intelligent Systems.

  • Advisor : MARCIO EDUARDO KREUTZ
  • COMMITTEE MEMBERS :
  • ALVARO DE MEDEIROS MACIEL
  • ANNE MAGALY DE PAULA CANUTO
  • IVANOVITCH MEDEIROS DANTAS DA SILVA
  • LUCIANO SALES BARROS
  • MARCIO EDUARDO KREUTZ
  • MONICA MAGALHAES PEREIRA
  • Data: Jan 31, 2022


  • Show Abstract
  • The objective of this study is the implementation and evaluation of intelligent systems for detecting turn-to-turn and turn-to-ground faults in wind turbines based on the squirrel-cage induction machine. To this end, two machine learning models are implemented, an artificial neural network and a convolutional neural network, to learn the characteristics of the stator electrical currents and differentiate healthy from damaged machines. The systems are trained with artificial data from simulations.
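
    A minimal sketch of the convolutional variant, assuming windows of three-phase stator current as input; the layer sizes and the three fault classes are illustrative assumptions, not the study's actual architecture.

        import torch
        import torch.nn as nn

        class CurrentCNN(nn.Module):
            """1D CNN fault classifier over stator-current windows."""

            def __init__(self, n_classes=3):  # e.g., healthy, turn-to-turn, turn-to-ground
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv1d(3, 16, kernel_size=7, padding=3),   # 3 phases in
                    nn.ReLU(),
                    nn.MaxPool1d(4),
                    nn.Conv1d(16, 32, kernel_size=5, padding=2),
                    nn.ReLU(),
                    nn.AdaptiveAvgPool1d(1),
                )
                self.classifier = nn.Linear(32, n_classes)

            def forward(self, x):             # x: (batch, 3, samples)
                return self.classifier(self.features(x).squeeze(-1))

        # Example: a batch of 8 windows, 1024 samples per phase.
        logits = CurrentCNN()(torch.randn(8, 3, 1024))
        print(logits.shape)                   # torch.Size([8, 3])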

3
  • FAGNER MORAIS DIAS
  • FormAr: Software Architecture Formalization for Critical Smart Cities Applications

  • Advisor : MARCEL VINICIUS MEDEIROS OLIVEIRA
  • COMMITTEE MEMBERS :
  • MARCEL VINICIUS MEDEIROS OLIVEIRA
  • THAIS VASCONCELOS BATISTA
  • FLAVIO OQUENDO
  • Data: Feb 11, 2022


  • Show Abstract
  • Errors during software development may give rise to flaws in the system that can cause significant damage. One of the most important stages in the software development process is modelling the system architecture, possibly using software architecture description languages (ADLs). The ADLs currently adopted by industry for software-intensive systems are mostly semi-formal and essentially based on SysML and specialized profiles. These ADLs allow describing the structure and the behavior of the system. Besides, it is possible to generate executable models or generate code in a target programming language and simulate its behaviour. This, however, does not constitute proof that the system is correct or safe. This work proposes a novel approach for empowering SysML-based ADLs with formal verification supported by model checking. It presents a CSP-based semantics for SysADL models. Furthermore, this work presents how correctness properties can be formally specified using CSP, and how the FDR4 refinement model-checker can verify these correctness properties. Finally, we present a new extension to SysADL Studio that allows the automated transformation of SysADL architecture descriptions into CSP processes and the verification of important system correctness properties. The whole approach is illustrated via a case study, which is also part of this document and demonstrates the usefulness of our approach in practice.

4
  • QUÉZIA EMANUELLY DE OLIVEIRA SOUZA
  • Traveling Salesman Dealer Problem

  • Advisor : MARCO CESAR GOLDBARG
  • COMMITTEE MEMBERS :
  • ELIZABETH FERREIRA GOUVEA GOLDBARG
  • MARCO CESAR GOLDBARG
  • MATHEUS DA SILVA MENEZES
  • SILVIA MARIA DINIZ MONTEIRO MAIA
  • Data: Feb 14, 2022


  • Show Abstract
  • In this research, the Traveling Tradesman Problem (TTP) is proposed, a variant of the Traveling Purchaser Problem not previously described in the literature. In this problem there is a set of vertices, which act as markets, where the tradesman can buy or sell goods. He seeks to buy a certain product in one city and sell it in another, so that the operation yields a profit. The purpose of the problem is to determine a Hamiltonian cycle that visits each vertex of a subset exactly once, carrying out purchase and sale operations, in order to maximize the profit obtained. The work provides a detailed description of the problem and develops instances for it, in addition to two solution metaheuristics designed to obtain competitive results, a GRASP and a Transgenetic algorithm, which were tested on instances ranging from 50 to 350 vertices. From the results obtained, it was possible to conclude that the transgenetic approach found better results than GRASP, although it required a higher processing time.

5
  • VITOR RODRIGUES GREATI
  • Hilbert-style formalism for two-dimensional notions of consequence

  • Advisor : JOAO MARCOS DE ALMEIDA
  • COMMITTEE MEMBERS :
  • REVANTHA RAMANAYAKE
  • CARLOS CALEIRO
  • JOAO MARCOS DE ALMEIDA
  • SÉRGIO ROSEIRO TELES MARCELINO
  • UMBERTO RIVIECCIO
  • YONI ZOHAR
  • Data: Feb 21, 2022


  • Show Abstract
  • The present work proposes a two-dimensional Hilbert-style deductive formalism (H-formalism) for B-consequence relations, a class of two-dimensional logics that generalize the usual (Tarskian, one-dimensional) notions of logic. We argue that the two-dimensional environment is appropriate for the study of bilateralism in logic, by allowing the primitive judgements of assertion and denial (or, as we prefer, the cognitive attitudes of acceptance and rejection) to act on independent but interacting dimensions in determining what follows from what. In this perspective, our proposed formalism constitutes an inferential apparatus for reasoning over bilateralist judgments. After a thorough description of the inner workings of the proposed proof formalism, which is inspired by the one-dimensional symmetrical Hilbert-style systems, we provide a proof-search algorithm for finite analytic systems that runs in at most exponential time in general, and in polynomial time when the concerned system only contains rules having at most one formula in the succedent. We then delve into the area of two-dimensional non-deterministic semantics via matrix structures containing two sets of distinguished truth-values, one qualifying some truth-values as accepted and the other as rejected, constituting a semantical path for bilateralism in the two-dimensional environment. We present an algorithm for producing analytic two-dimensional Hilbert-style systems for sufficiently expressive two-dimensional matrices, as well as some streamlining procedures that make it possible to considerably reduce the size and complexity of the resulting calculi. For finite matrices, we point out that the procedure results in finite systems. In the end, as a case study, we investigate the logic of formal inconsistency called mCi with respect to its axiomatizability in terms of Hilbert-style systems. We prove that there is no finite one-dimensional Hilbert-style axiomatization for this logic, but that it inhabits a two-dimensional consequence relation that is finitely axiomatizable by a finite two-dimensional Hilbert-style system. The existence of such a system follows directly from the proposed axiomatization procedure, in view of the sufficiently expressive 5-valued non-deterministic two-dimensional semantics available for that two-dimensional consequence relation.

6
  • CARLOS ANTÔNIO DE OLIVEIRA NETO
  • Hibersafe: Curating StackOverflow for Hibernate Exception-related Bugs

  • Advisor : ROBERTA DE SOUZA COELHO
  • COMMITTEE MEMBERS :
  • EIJI ADACHI MEDEIROS BARBOSA
  • ROBERTA DE SOUZA COELHO
  • RODRIGO BONIFACIO DE ALMEIDA
  • UIRA KULESZA
  • Data: Mar 29, 2022


  • Show Abstract
  • Hibernate is a popular object-relational mapping framework for Java used to support data persistence. It provides code annotations that are processed to carry out the persistence process. The way annotations are processed, however, is not easy to understand for most software developers who use this framework. Also, its documentation appears to be incomplete with regard to exceptional behaviors that occur with the use of annotations. Therefore, this work seeks to provide ways to help developers better understand, and then fix, exception-related bugs that may arise when using Hibernate annotations. In the proposed approach, the crowd knowledge provided by StackOverflow, in this case questions and their answers about Hibernate, is used by a tool called Hibersafe, which aims to help developers find better solutions to the exception-related bugs they face and identify the annotation-exception relationship that may have caused them. We compared the tool with the traditional approach using the Google search engine, the main source of information used by developers when an error occurs. Our tool was more efficient and accurate on the tested scenarios when compared to Google, showing that it could be used as a sort of curator for Hibernate exception-related bugs.

7
  • EMÍDIO DE PAIVA NETO
  • Opportunistic flow encryption between programmable data planes through in-band signaling

  • Advisor : AUGUSTO JOSE VENANCIO NETO
  • COMMITTEE MEMBERS :
  • AUGUSTO JOSE VENANCIO NETO
  • BENJAMIN RENE CALLEJAS BEDREGAL
  • MICHELE NOGUEIRA LIMA
  • RAMON DOS REIS FONTES
  • ROGER KREUTZ IMMICH
  • Data: May 4, 2022


  • Show Abstract
  • The Software-Defined Networking (SDN) paradigm has been widely used in diverse ecosystems as an enabler for the management of heterogeneous administrative domains, for extending programmable resources to intra-domain networks, or even for composing cloud-native network architectures. On the other hand, while it supports the ability of next-generation networks to adapt to new protocols, SDN widens the attack surface of the network, resulting in several security issues. From this point of view, control applications running atop the SDN controller are responsible for establishing secure connections between the underlying node pairs. The secure exchange of cryptographic keys, so that two interconnected nodes can communicate securely over a public channel, is a well-known challenge in the field of symmetric cryptography. Diffie–Hellman (DH) key exchange combined with the Advanced Encryption Standard (AES) is a widely adopted solution for exchanging cryptographic keys and encrypting traffic between nodes over untrusted networks. However, traditional cryptographic implementations impose high computational costs and key-management risks, which can cause problems in the centralized control plane of the SDN network. This research explores the Programming Protocol-independent Packet Processors (P4) paradigm and proposes dh-aes-p4, the first solution for DH key exchange with AES tailored to P4-based SDN devices. Although similar approaches exist in the literature, this work distinguishes itself as a new, low-cost, granular (based on network flows), and transparent alternative.
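    For readers unfamiliar with the building blocks named above, the following minimal Python sketch shows the DH handshake and key derivation that a solution like dh-aes-p4 adapts to the data plane; the toy group parameters (p = 23, g = 5) and the SHA-256 key derivation are assumptions of this illustration, not values or choices taken from dh-aes-p4:

    ```python
    # Textbook Diffie-Hellman over a toy group (never use such small numbers
    # in practice), followed by derivation of a 128-bit AES key.
    import secrets
    import hashlib

    p, g = 23, 5                      # toy public group parameters

    a = secrets.randbelow(p - 2) + 1  # node A's private exponent
    b = secrets.randbelow(p - 2) + 1  # node B's private exponent
    A = pow(g, a, p)                  # public half-keys; in dh-aes-p4 these
    B = pow(g, b, p)                  # travel in-band between data planes

    shared_ab = pow(B, a, p)          # A's view of the shared secret
    shared_ba = pow(A, b, p)          # B's view of the shared secret
    assert shared_ab == shared_ba     # both ends agree without sending it

    # Derive a 128-bit key for AES from the shared secret (illustrative KDF).
    aes_key = hashlib.sha256(shared_ab.to_bytes(2, "big")).digest()[:16]
    print(aes_key.hex())
    ```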

8
  • RAMON WILLIAMS SIQUEIRA FONSECA

  • Collaborative Requirements Elicitation: An Engagement-Focused Method

  • Advisor : MARCIA JACYNTHA NUNES RODRIGUES LUCENA
  • COMMITTEE MEMBERS :
  • ISABEL DILLMANN NUNES
  • LYRENE FERNANDES DA SILVA
  • MARCIA JACYNTHA NUNES RODRIGUES LUCENA
  • MARILIA ARANHA FREIRE
  • Data: May 5, 2022


  • Show Abstract
  • Requirements Engineering is a fundamental step in achieving the success of a project: in addition to providing means to achieve the project's goals, it also supports its maintenance over time. Requirements Engineering works on understanding and perceiving unique contexts and particularities in order to interpret the setting in which the problem is embedded. For this reason, the requirements elicitation phase cannot be seen only as a technological problem, since in this activity the social context is more critical than in the specification, design, and programming phases. In this sense, communication is an important factor to be taken into account throughout this process. Collaborative processes aim to make communication more efficient; however, studies point to limitations and recurring difficulties in maintaining clear, unambiguous communication between team members during the RE process. Considering this problem, the literature indicates that a lack of engagement of those involved in the requirements elicitation process affects team communication and collaboration. In addition, Requirements Engineering lacks tools and mechanisms to measure and control the level of engagement of the software team; since stakeholder engagement cannot be measured, it cannot be managed. Given this demand, the objective of this work is to help incorporate the engagement of those involved in the collaborative requirements elicitation process through resources present in project management tools. To this end, an engagement method was created to increase stakeholder motivation levels during requirements elicitation. The method was applied in two phases in classes of the Bachelor's Degree in Information Technology at the Universidade Federal do Rio Grande do Norte. As a result, we observed that the engagement method helped guide the discussion during requirements elicitation and optimized the team's communication, and most students felt engaged during the elicitation process.

9
  • PAULO LEONARDO SOUZA BRIZOLARA
  • Heterogeneity in Discovery Systems: A Survey and a Decentralized Solution for Integrated Discovery

  • Advisor : LEONARDO CUNHA DE MIRANDA
  • COMMITTEE MEMBERS :
  • EVERTON RANIELLY DE SOUSA CAVALCANTE
  • JULIO CESAR DOS REIS
  • LEONARDO CUNHA DE MIRANDA
  • MONICA MAGALHAES PEREIRA
  • Data: May 30, 2022


  • Show Abstract
  • In distributed systems, the first step in establishing communication with another device is to know its address, that is, to locate it. To locate services or resources automatically, discovery systems have been applied in diverse environments and usage contexts, from wireless sensor networks (WSNs) and peer-to-peer systems to high-performance clusters and cloud systems. The great diversity of usage contexts and application needs has led to the development of specialized discovery protocols, often incompatible with each other. This incompatibility prevents discovery across heterogeneous environments or protocols, restricting the services accessible to a given device. Addressing these limitations requires discovery solutions that integrate heterogeneous discovery environments and protocols, which in turn requires understanding in which aspects these environments and protocols vary and what they have in common. To address this issue, this work presents a review of secondary studies from the literature on service discovery and resource discovery across different environments, i.e., a tertiary study on the topic. Based on this review, a solution was developed to integrate service discovery across heterogeneous discovery environments and protocols. A proof of concept of this solution was implemented, along with two discovery mechanisms: one for local service discovery and the other for decentralized discovery over the Internet. To evaluate the feasibility of the solution and analyze how these mechanisms interact with each other, a controlled experiment was conducted in a virtual network environment. Despite the limitations and challenges that remain, this research contributes to the understanding of discovery systems, in what they have in common and in their points of variation, and moves towards the "universal discovery" of services, which may allow the construction of new kinds of applications.
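    As a concrete illustration of one of the two mechanisms, the sketch below implements a bare-bones local discovery exchange over UDP multicast in Python; the group address, port, and message format are assumptions of this example and do not reproduce the dissertation's actual protocol:

    ```python
    # Minimal probe/offer discovery over UDP multicast.
    import socket
    import struct

    GROUP, PORT = "239.255.0.42", 5000   # illustrative multicast endpoint

    def responder(service: bytes) -> None:
        """Join the group and answer probes for the service we provide."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(("", PORT))
        mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
        while True:
            data, addr = sock.recvfrom(1024)
            if data == b"DISCOVER " + service:
                sock.sendto(b"OFFER " + service, addr)   # unicast reply

    def probe(service: bytes, timeout: float = 2.0) -> list:
        """Multicast a probe and collect the addresses that offer the service."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(timeout)
        sock.sendto(b"DISCOVER " + service, (GROUP, PORT))
        found = []
        try:
            while True:
                data, addr = sock.recvfrom(1024)
                if data == b"OFFER " + service:
                    found.append(addr)
        except socket.timeout:
            pass
        return found

    # e.g. run responder(b"printer") on one host and probe(b"printer") on another.
    ```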

10
  • EDIR LUCAS DA SILVA ICETY BRAGA
  • A Catalog for Elicitation and Validation of Collaboration Requirements

  • Advisor : LYRENE FERNANDES DA SILVA
  • COMMITTEE MEMBERS :
  • LYRENE FERNANDES DA SILVA
  • MARCIA JACYNTHA NUNES RODRIGUES LUCENA
  • ISABEL DILLMANN NUNES
  • MARIA LENCASTRE PINHEIRO DE MENEZES E CRUZ
  • Data: Jul 13, 2022


  • Show Abstract
  • To be defined

11
  • JAKELINE BANDEIRA DE OLIVEIRA
  • A study on the technological support for audio editing in audiovisual production for digital social media by non-specialists

  • Advisor : FERNANDO MARQUES FIGUEIRA FILHO
  • COMMITTEE MEMBERS :
  • BRUNO SANTANA DA SILVA
  • FERNANDO MARQUES FIGUEIRA FILHO
  • MARCIA JACYNTHA NUNES RODRIGUES LUCENA
  • TICIANNE DE GOIS RIBEIRO DARIN
  • Data: Jul 26, 2022


  • Show Abstract
  • The greater availability of digital devices, such as smartphones, and of internet connectivity, combined with the human need for communication, has favored a significant increase in the consumption and production of audiovisual content in and for digital social media. While in traditional media content production was the responsibility of specialist professionals, in digital social media non-specialist producers (people without specific training and experience) have gained space. However, the role of this new audience in the audiovisual production process is still little understood. Thus, this work aims to (1) investigate the process of audiovisual content production by non-specialist producers for their own channels on YouTube and Instagram; and (2) assess whether and how the usability of the Audacity audio-editing software may be hindering audiovisual production by non-specialists. Three studies were carried out to achieve these goals. Study I was a semi-structured interview with non-specialist producers about their content production processes for their YouTube and Instagram channels. Study II was a questionnaire (survey) on the demands of content producers for digital social media, particularly those related to difficulties in editing video and audio. Study III evaluated the usability of Audacity with a usability test and the SUS questionnaire, based on the observation of volume, cutting, noise, and distortion audio-editing tasks. As the main results, we identified that non-specialist producers tend to get involved in various activities of the content production process for digital social media, in particular in editing what was recorded. Some of them, however, reported avoiding editing recorded content. Among the various demands reported, content producers claimed to face more difficulties in editing audio than video. When evaluating Audacity's interface, we identified that usability problems may indeed be related to the editing difficulties reported by content producers. Thus, it became clear that there are opportunities for improvement in the Audacity interface to adequately support non-specialist content producers for digital social media.

12
  • LUCIANO ALEXANDRE DE FARIAS SILVA
  • AUTOMATED GENERATION OF VERIFIED COMPETITIVE HARDWARE

  • Advisor : MARCEL VINICIUS MEDEIROS OLIVEIRA
  • COMMITTEE MEMBERS :
  • JULIANO MANABU LYODA
  • MARCEL VINICIUS MEDEIROS OLIVEIRA
  • MONICA MAGALHAES PEREIRA
  • Data: Jul 26, 2022


  • Show Abstract
  • The complexity of development and analysis is inherent to systems in general, and especially to concurrent systems. When we work with critical systems this becomes much more evident, as an inconsistency is usually associated with a high cost. Thus, the sooner we can identify and remove an inconsistency in the design of a system, the lower its cost. For this reason, it is common to use varied strategies to reduce the difficulty and problems faced in this process. One of these strategies is the use of formal methods, which can use process algebra to specify and analyze concurrent systems, improving the understanding of the project and enabling the identification of possible inconsistencies even in the initial stages, ensuring the correctness of the specified system. This work presents a tool for the automatic translation of the main operators of the Communicating Sequential Processes (csp) process algebra into the vhsic hardware description language (vhdl). csp is a language that allows us to formally describe a concurrent system. vhdl is a hardware description language that can be synthesized for a Field-Programmable Gate Array (FPGA). Our automatic hardware generation tool is validated by a case study of an intelligent elevator control system: we present its formal specification in csp and then its translation into vhdl.

13
  • LAVINIA MEDEIROS MIRANDA
  • LLVM-ACT: A profiling-based tool for selection of an approximate computing technique

  • Advisor : MONICA MAGALHAES PEREIRA
  • COMMITTEE MEMBERS :
  • IVAN SARAIVA SILVA
  • JORGIANO MARCIO BRUNO VIDAL
  • MARCIO EDUARDO KREUTZ
  • MONICA MAGALHAES PEREIRA
  • SILVIO ROBERTO FERNANDES DE ARAUJO
  • Data: Jul 29, 2022


  • Show Abstract
  • Approximate Computing is an emerging paradigm that trades some data accuracy for gains in aspects such as performance and energy efficiency. At the software level, there are tools in this scope that apply approximate computing techniques. However, these tools are limited: each covers only a specific scope, applies only one technique, and/or requires manual annotations in the applications. The current state of the art still has open questions, such as whether application features influence the choice of technique, and what the most appropriate technique would be for each particular context. Thus, this dissertation proposes a tool that, based on application profiling, chooses the most appropriate approximate computing technique to apply. The tool uses the LLVM compilation infrastructure, where each step is implemented as an LLVM pass for code analysis or transformation. In addition to the profiler, three approximate computing techniques were implemented, and the experimental results show that the technique chosen by the tool presents a balance between error rate and speedup.
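    To make the kind of transformation concrete, the sketch below illustrates loop perforation, a classic approximate computing technique of the sort such a tool can apply (the dissertation applies its techniques as LLVM passes over IR; this Python version, and the example workload, are only illustrative):

    ```python
    # Loop perforation: process only a fraction of the iterations, trading
    # accuracy for speed on error-tolerant computations.
    def mean_exact(xs):
        return sum(xs) / len(xs)

    def mean_perforated(xs, skip=4):
        """Visit every `skip`-th element, doing ~1/skip of the loop's work."""
        sampled = xs[::skip]
        return sum(sampled) / len(sampled)

    data = [float(i % 100) for i in range(1_000_000)]
    exact = mean_exact(data)
    approx = mean_perforated(data, skip=4)
    print(f"exact={exact:.3f} approx={approx:.3f} "
          f"relative error={abs(exact - approx) / exact:.2%}")
    ```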

14
  • MARCELO MAGALHÃES DRUMMOND DIAS
  • Ethics in Autonomous Multi-Agent Systems

  • Advisor : ANNE MAGALY DE PAULA CANUTO
  • COMMITTEE MEMBERS :
  • ANDRÉ CARLOS PONCE DE LEON FERREIRA DE CARVALHO
  • ANNE MAGALY DE PAULA CANUTO
  • BRUNO MOTTA DE CARVALHO
  • FLAVIUS DA LUZ E GORGONIO
  • Data: Sep 9, 2022


  • Show Abstract
  • At this moment in human history, ensuring ethics in Autonomous Multi-Agent Systems has become mandatory, given the evolution of this technology in recent years. Humans have been deceiving one another since the species began, which leads us to ask whether this behavior is already replicated by computer systems, whether it is possible to detect a machine's lie, whether machines commit deception, and how to identify such behavior.
    This research focuses on the creation of the Severus agent, with the aim of using Artificial Intelligence (AI) to supervise AI. The agent combines, in its programming, the ethics of the exact sciences with the approaches of the humanities and social sciences, and follows a clearly bottom-up approach in its deployment and implementation. The work is designed to adjust the level of autonomy of the agents in the system, essentially observing respect for human rights, the public good as paramount, and the use of shared intelligence in the service of better coexistence. In this sense, the agents work under human command, notably with the Requirements Engineer responsible for the ethical control of the systems, whose key function is to negotiate a workable balance with the stakeholders.
    In simple terms, for those who work, live, and love in the most diverse societies, the essence of ethics in this new post-human era is to recognize that AI will be a fundamental pillar in determining whether humanity has a future. This research concludes that, to succeed and to have good ethics in Multi-Agent Systems, we must participate actively in all decisions, learn and respect the psychology of the new diversity in teams of humans and machines, and contribute to the formation of skilled leaders who steer AI toward genuine cooperation in the service of humanity. This conclusion will always be dynamic, since agents behave very differently across environments, requiring effective work by the Requirements Engineer to adapt the teams' payoff matrix toward a good dominant strategy, avoiding the threats that this new era of technological gadgets represents.

15
  • JAINE RANNOW BUDKE
  • Face Biometrics for Differentiating Typical Development and Autism Spectrum Disorder: a methodology for collecting and evaluating a dataset

  • Advisor : MARJORY CRISTIANY DA COSTA ABREU
  • COMMITTEE MEMBERS :
  • BRUNO MOTTA DE CARVALHO
  • MARJORY CRISTIANY DA COSTA ABREU
  • PLACIDO ANTONIO DE SOUZA NETO
  • Data: Sep 16, 2022


  • Show Abstract
  • Autism spectrum disorder (ASD) is a neuro-developmental disability marked by deficits in communicating and interacting with others. The standard protocol for diagnosis is based on the fulfillment of descriptive criteria, which do not establish precise measures and contribute to late diagnosis. Thus, new diagnostic approaches should be explored in order to better standardise practices. The best-case scenario would be a reliable automated system that indicates the diagnosis with an acceptable level of assurance. At the moment, there are no publicly available, representative open-source datasets built with this diagnosis as their main aim. This work proposes a new methodology for collecting a face biometrics dataset with the aim of investigating the differences in facial expressions between ASD and Typical Development (TD) people. A new dataset of facial images was collected from YouTube videos, and computer vision-based techniques were used to extract image frames and filter the dataset. We also performed initial experiments using classical supervised learning models as well as ensembles and achieved promising results.
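    The frame-extraction step of such a pipeline can be sketched with OpenCV as below; the file names and sampling rate are assumptions of this illustration, and the subsequent filtering (e.g., keeping only frames with a detectable face) would follow the methodology's own criteria:

    ```python
    # Sample frames from a downloaded video for later filtering and labeling.
    import cv2  # pip install opencv-python

    def extract_frames(video_path: str, out_prefix: str, every_n: int = 30) -> int:
        """Save every `every_n`-th frame to disk; return how many were kept."""
        cap = cv2.VideoCapture(video_path)
        kept = index = 0
        while True:
            ok, frame = cap.read()
            if not ok:                     # end of stream or read error
                break
            if index % every_n == 0:
                cv2.imwrite(f"{out_prefix}_{kept:05d}.png", frame)
                kept += 1
            index += 1
        cap.release()
        return kept

    print(extract_frames("video.mp4", "frames/subject01"))
    ```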

16
  • ELTONI ALVES GUIMARÃES
  • A Study to Identify and Classify Ambiguities in User Stories Using Machine Learning

  • Advisor : MARCIA JACYNTHA NUNES RODRIGUES LUCENA
  • COMMITTEE MEMBERS :
  • MARCIA JACYNTHA NUNES RODRIGUES LUCENA
  • EDUARDO HENRIQUE DA SILVA ARANHA
  • IVANOVITCH MEDEIROS DANTAS DA SILVA
  • RICARDO ARGENTON RAMOS
  • Data: Sep 29, 2022


  • Show Abstract
  • Ambiguity in requirements writing is one of the most common defects found in requirements documents. There are a variety of concepts of what constitutes ambiguity in requirements, and to identify ambiguity one must understand each of them. Ambiguity can compromise the quality of User Stories and can be present in requirements written in natural language. In the literature, few studies investigate the potential of Machine Learning algorithms to classify ambiguity in User Stories. This dissertation proposes an approach to identify and classify ambiguity in User Stories through the use of Machine Learning algorithms. To this end, a checklist was developed to help in the identification of ambiguities in User Stories, and an approach based on two Machine Learning algorithms is used: (i) Support Vector Machine and (ii) Random Forest. The models generated by each algorithm are evaluated and compared.
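    A minimal sketch of this setup is shown below: User Stories are vectorized and the two algorithms are trained on checklist-derived labels. The TF-IDF features, toy stories, and labels are assumptions of the example, not the dissertation's data:

    ```python
    # Compare an SVM and a Random Forest on a toy ambiguity-labeling task.
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    stories = [
        "As a user, I want the page to load quickly",          # vague: "quickly"
        "As an admin, I want to export the report as CSV",     # precise
        "As a buyer, I want a better checkout experience",     # vague: "better"
        "As a clerk, I want to filter orders by date range",   # precise
    ]
    labels = [1, 0, 1, 0]   # 1 = ambiguous, 0 = unambiguous (from the checklist)

    for model in (LinearSVC(), RandomForestClassifier(n_estimators=100)):
        clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), model)
        clf.fit(stories, labels)
        print(type(model).__name__, clf.predict(["I want the app to be fast"]))
    ```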


17
  • JOÃO VICTOR LOPES DA SILVA

  • An Aspect-Oriented Approach to Monitoring Platforms for Smart Cities

  • Advisor : THAIS VASCONCELOS BATISTA
  • COMMITTEE MEMBERS :
  • THAIS VASCONCELOS BATISTA
  • EVERTON RANIELLY DE SOUSA CAVALCANTE
  • FREDERICO ARAUJO DA SILVA LOPES
  • ROSSANA MARIA DE CASTRO ANDRADE
  • Data: Oct 26, 2022


  • Show Abstract
  • Platforms for developing smart city applications are responsible for providing various services that facilitate application development. Typically, such platforms manage a variety of applications, handle a large volume of data, and serve a significant number of users who generate a high volume of requests. This large number of requests often overloads the platform, degrading the quality of service provided to users. Also, as smart city platforms process requests for operations on large volumes of geographic data, it is important to monitor the databases to check whether there are limitations in processing large amounts of data within an acceptable time. Therefore, in this context, it is necessary to monitor the underlying computational infrastructure on which platforms for smart cities and applications are deployed, as well as to monitor operations regarding access to the geographic data stored in the databases used by the platforms. Aiming to address this problem, the objective of this work is to propose and implement a non-invasive strategy for monitoring smart city platforms, including the monitoring of the underlying infrastructure as well as of the operations directed to the databases. The proposed strategy is based on the aspect-oriented programming paradigm, so that it is possible to monitor the computational infrastructure without intervening in the implementation of the platform or creating coupling with respect to monitoring. This work also presents the implementation of the strategy and its instantiation in the monitoring of the Smart Geo Layers (SGeoL) platform, as well as an evaluation of the proposed monitoring strategy.
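    The essence of the non-invasive, aspect-oriented idea can be sketched as follows. In a Java platform this would typically be AspectJ-style around advice woven without editing the target code; the Python decorator below only mimics that mechanism, and all names are illustrative:

    ```python
    # Wrap an existing operation with monitoring without editing its source.
    import functools
    import time

    def monitored(fn):
        """'Around advice': measure latency of fn and report it externally."""
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                elapsed_ms = (time.perf_counter() - start) * 1000
                print(f"[monitor] {fn.__name__} took {elapsed_ms:.1f} ms")
        return wrapper

    def query_layer(layer: str) -> str:
        """Stand-in for a platform operation that queries geographic data."""
        time.sleep(0.05)
        return f"features of {layer}"

    # The weaving step happens outside the monitored code:
    query_layer = monitored(query_layer)
    print(query_layer("bus-stops"))
    ```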

18
  • GABRIEL ARAÚJO DE SOUZA
  • Using Federated Learning Techniques to Improve Artificial Intelligence Models in the Context of Brazilian Public Institutions

  • Advisor : NELIO ALESSANDRO AZEVEDO CACHO
  • COMMITTEE MEMBERS :
  • ALLAN DE MEDEIROS MARTINS
  • DANIEL SABINO AMORIM DE ARAUJO
  • EVERTON RANIELLY DE SOUSA CAVALCANTE
  • NELIO ALESSANDRO AZEVEDO CACHO
  • PEDRO MESQUITA MOURA
  • Data: Dec 20, 2022


  • Show Abstract
  • The use of artificial intelligence (AI) models has become frequent in several areas of knowledge to solve different problems efficiently. As a result, many Brazilian public institutions have invested in AI solutions to improve and optimize their services. However, these institutions, mainly public safety organizations, handle privacy-sensitive data in their solutions. Thus, the use of this data is bureaucratic, primarily in order to respect all the requirements of the General Data Protection Law. Furthermore, each institution has access to a limited set of examples, which makes its AI models biased. Sharing data between institutions could enable the creation of general datasets with a better capacity to produce more robust models; however, due to the nature of the data, this type of action is in many cases unfeasible. Federated learning has therefore gained space in the recent literature as a way to share AI models safely. In this technique, instead of sharing data, only the already-trained models are aggregated on a server to produce a new model. With this, it is possible to transfer knowledge from various models and create an improved version of them. Therefore, this work proposes using federated learning to create a safe environment for sharing AI models among Brazilian public institutions. In addition, we experiment with different techniques from the literature to identify the best federated algorithm for the studied scenario.
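    The aggregation at the heart of such an environment can be sketched in a few lines. The weighted averaging below is the standard FedAvg rule (weighting clients by local dataset size); the arrays stand in for trained model parameters and are purely illustrative:

    ```python
    # Federated averaging: only model weights leave each institution.
    import numpy as np

    def fed_avg(client_weights, n_samples):
        """Combine client models into a global model, weighted by data size."""
        total = sum(n_samples)
        return sum(w * (n / total) for w, n in zip(client_weights, n_samples))

    # Three institutions share locally trained weights, never their raw data.
    clients = [np.array([0.9, 1.1]), np.array([1.2, 0.8]), np.array([1.0, 1.0])]
    sizes = [1000, 4000, 5000]                 # local training set sizes
    global_model = fed_avg(clients, sizes)     # sent back to all institutions
    print(global_model)
    ```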

19
  • SAMUEL LUCAS DE MOURA FERINO
  • Unveiling the Teaching Methods Adopted in DevOps Courses

  • Advisor : UIRA KULESZA
  • COMMITTEE MEMBERS :
  • EDUARDO HENRIQUE DA SILVA ARANHA
  • UIRA KULESZA
  • VINICIUS CARDOSO GARCIA
  • Data: Dec 22, 2022


  • Show Abstract
  • DevOps consists of a set of practices that help address conflicts between development and operations teams and seeks to ensure fast and reliable software releases. Understanding these practices is essential for software engineers working in industry. In this sense, DevOps education takes on the vital task of preparing new professionals by teaching these practices with suitable teaching methods. Existing research shows that teaching methods are useful for educators developing and improving their DevOps courses. However, there are insufficient studies investigating teaching methods in DevOps education. In this dissertation, we conducted two empirical studies to understand the teaching methods used in DevOps education. In the first study, we investigated the teaching methods available in the literature. In the second study, we analyzed the teaching methods applied in existing DevOps courses through interviews with DevOps educators. The goal of our work is to guide new DevOps educators toward a better teaching experience. As a result of the studies, we present a comprehensive set of 23 teaching methods, including traditional ones (such as lectures) as well as less usual ones, such as studio-based learning. Project-based learning and collaborative learning were the most recurrent teaching methods found in both studies. Most of these methods require greater interaction between educators and students. We also present links between the teaching methods and challenges. We established these links during Study I based on an analysis of empirical studies on teaching methods, while the links from Study II came from an analysis of a related study. Such links can help educators select course teaching methods, choosing those that address the challenges of their own teaching context.

Thesis
1
  • GABRIEL ALVES VASILJEVIC MENDES
  • Model, Taxonomy and Methodology for Research Employing Electroencephalography-based Brain-Computer Interface Games

  • Advisor : LEONARDO CUNHA DE MIRANDA
  • COMMITTEE MEMBERS :
  • LEONARDO CUNHA DE MIRANDA
  • BRUNO MOTTA DE CARVALHO
  • SELAN RODRIGUES DOS SANTOS
  • FABRICIO LIMA BRASIL
  • MARIA CECILIA CALANI BARANAUSKAS
  • Data: Jan 31, 2022


  • Show Abstract
  • The rapid expansion of Brain-Computer Interface (BCI) technology, aligned with advancements in the fields of Physiological Computing (PC), Human-Computer Interaction (HCI), and Machine Learning (ML), has allowed for the recent development of applications outside of clinical environments, such as education, arts, and games. Games controlled by electroencephalography (EEG), a specific case of BCI technology, benefit from both the fields of BCI and games, since they can be played by virtually any person regardless of physical condition, can be applied in numerous contexts, and are ludic by nature. Despite these recent advancements, there is still no solid theoretical foundation aggregating the terminology and methods of these fields, since current models and classification schemes can represent characteristics of either BCI systems or games, but not both. In this sense, this work presents a general model for representing EEG-based BCI games, a taxonomy for classifying primary studies of the field, and a methodology for conducting scientific studies using those games. The proposed model is intended to help researchers describe, compare, and develop new EEG-controlled games by instantiating its components using concepts from the fields of BCI and games. The CoDIS taxonomy was constructed based on an expanded version of this model, which considers four aspects of EEG-controlled games: concept, design, implementation, and study, each with different dimensions to represent a variety of characteristics of such systems. Based on both the model and the taxonomy, and guided by the principles of empirical research, the PIERSE methodology was developed for the planning, implementation, execution, and reporting of scientific experiments that employ EEG-based BCI games.

2
  • THIAGO NASCIMENTO DA SILVA
  • Algebraic Semantics and Calculi for Nelson's logics

  • Advisor : JOAO MARCOS DE ALMEIDA
  • COMMITTEE MEMBERS :
  • FEY LIANG
  • TOMMASO FLAMINIO
  • MANUELA BUSANICHE
  • JOAO MARCOS DE ALMEIDA
  • UMBERTO RIVIECCIO
  • Data: Feb 18, 2022


  • Show Abstract
  • The aim of this thesis is to study a family of logics comprising Nelson's logic S, constructive logic with strong negation N3, quasi-Nelson logic QN, and quasi-Nelson implicative logic QNI. This is done in two ways: by means of an axiomatisation via a Hilbert calculus, and by studying some of the properties of the corresponding quasi-variety of algebras. The main contribution of the thesis is to prove that these logics fit within the theory of algebraisable logics. Making use of this result, the following are also proven. Regarding S, we introduce its first semantics, axiomatise it by means of a finite Hilbert-style calculus, and establish a version of the deduction theorem for it. Regarding QN and QNI, we show that both are algebraisable with respect to the classes of quasi-Nelson algebras and quasi-Nelson implication algebras, respectively; we show that they are non-self-extensional; we show how to obtain from them, by axiomatic extensions, other well-known logics, such as the {->, ~}-fragment of intuitionistic propositional logic, the {->, ~}-fragment of Nelson's constructive logic with strong negation, and classical logic; and finally, we make explicit the quaternary term that guarantees that both QN and QNI satisfy the deduction theorem. Regarding N3, we study the role of the Nelson identity ((φ -> (φ -> ψ)) ∧ (~ψ -> (~ψ -> ~φ)) = φ -> ψ) in establishing order-theoretic properties for its algebraic semantics. Moreover, we study the ⟨^, v, ~, ¬, 0, 1⟩-subreducts of quasi-Nelson algebras and, by making use of their twist representation, prove that this object-level correspondence can be stated as a categorical equivalence. Lastly, it is worth noting that QNI is the {->, ~}-fragment of QN, so some results concerning QNI may be easily extended to QN.

3
  • RODRIGO REBOUÇAS DE ALMEIDA
  • Business-driven Technical Debt Prioritization

  • Advisor : UIRA KULESZA
  • COMMITTEE MEMBERS :
  • CAROLYN SEAMAN
  • CHRISTOPH TREUDE
  • EDUARDO HENRIQUE DA SILVA ARANHA
  • EIJI ADACHI MEDEIROS BARBOSA
  • MARCOS KALINOWSKI
  • UIRA KULESZA
  • Data: Feb 23, 2022


  • Show Abstract
  • Technical debt happens when teams take shortcuts in software development to gain short-term benefits at the cost of making future changes more expensive. Previous results show a misalignment between the prioritization done by technical professionals and the prioritization expected by business professionals. This thesis presents a business-driven approach to prioritizing technical debt. The research is organized into three phases: (i) exploratory, a survey and interviews with practitioners to identify the business causes of technical debt; (ii) concept verification, where the proposed approach was evaluated in a multi-case study; and (iii) design and evaluation, where design science research involving three companies was conducted to develop Tracy, an approach for business-driven technical debt prioritization, followed by a multiple case study in two other companies. We identified business causes and impacts of technical debt; designed the approach for business-driven technical debt prioritization; developed a tool based on the approach; and finally ran a multiple case study in two companies to evaluate the solution. Results show a set of business causes behind the creation of technical debt, and also that the business-driven prioritization of technical debt can improve the alignment and communication between technical and business stakeholders. We also identified a set of business factors that may drive technical debt prioritization.

4
  • THIAGO VINICIUS VIEIRA BATISTA
  • Generalizations of the Choquet Integral as a Combination Method in Ensembles of Classifiers
  • Advisor : BENJAMIN RENE CALLEJAS BEDREGAL
  • COMMITTEE MEMBERS :
  • ANNE MAGALY DE PAULA CANUTO
  • BENJAMIN RENE CALLEJAS BEDREGAL
  • GRAÇALIZ PEREIRA DIMURO
  • RONEI MARCOS DE MORAES
  • RUI EDUARDO BRASILEIRO PAIVA
  • Data: Mar 4, 2022


  • Show Abstract
  • An ensemble of classifiers is a machine learning method consisting of a collection of classifiers that process the same information and whose outputs are combined in some manner. The classification process has two main steps: the classification step, in which each classifier processes the information and provides an output, and the combination step, in which the outputs of all classifiers are combined into a single output. Although the combination step is extremely important, most works focus on the classification step. Therefore, in this work, generalizations of the Choquet integral are proposed as combination methods in ensembles of classifiers. The main idea is to allow greater freedom in the choice of the functions in the integral, opening possibilities for optimization and for the use of functions suited to the data. Furthermore, a new notion of partial monotonicity is proposed and, consequently, an alternative to the notion of pre-aggregation functions. Preliminary results show that the generalizations of the Choquet integral used in the ensemble were capable of obtaining good results, with performance superior to well-known methods from the literature such as XGBoost and Bagging, among others. Furthermore, the generalizations that used the proposed aggregation functions performed well when compared to other classes of functions, such as copulas and overlap functions.
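    For reference, the standard discrete Choquet integral that these generalizations start from can be computed as below; the fuzzy measure values and classifier scores are toy assumptions of this sketch:

    ```python
    # Discrete Choquet integral of classifier scores w.r.t. a fuzzy measure mu,
    # given as a dict from coalitions (frozensets of classifier indices) to [0,1].
    def choquet(x, mu):
        n = len(x)
        order = sorted(range(n), key=lambda i: x[i])    # ascending scores
        total, prev = 0.0, 0.0
        for k, i in enumerate(order):
            coalition = frozenset(order[k:])            # classifiers with x >= x[i]
            total += (x[i] - prev) * mu[coalition]
            prev = x[i]
        return total

    scores = [0.6, 0.9, 0.3]        # three classifiers' scores for one class
    mu = {frozenset({0, 1, 2}): 1.0, frozenset({0, 1}): 0.8,
          frozenset({1, 2}): 0.7, frozenset({0, 2}): 0.5,
          frozenset({0}): 0.4, frozenset({1}): 0.5, frozenset({2}): 0.2,
          frozenset(): 0.0}
    print(choquet(scores, mu))      # fused score for the combination step
    ```

    The generalizations studied in the thesis allow other functions in place of the fixed arithmetic operations used here, which is the source of the extra freedom of choice, and of the need for the weaker monotonicity notion, mentioned in the abstract.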


5
  • ISLAME FELIPE DA COSTA FERNANDES
  • Hybridizing Metaheuristics for Multi and Many-objective Problems in a Multi-agent Architecture

  • Advisor : ELIZABETH FERREIRA GOUVEA GOLDBARG
  • COMMITTEE MEMBERS :
  • ELIZABETH FERREIRA GOUVEA GOLDBARG
  • MARCO CESAR GOLDBARG
  • MYRIAM REGATTIERI DE BIASE DA SILVA DELGADO
  • SILVIA MARIA DINIZ MONTEIRO MAIA
  • THATIANA CUNHA NAVARRO DE SOUZA
  • Data: Jun 15, 2022


  • Show Abstract
  • Hybrid algorithms combine the best features of individual metaheuristics and have proven able to find high-quality solutions for multi-objective optimization problems. Architectures provide generic functionalities and features for implementing new hybrid algorithms to solve arbitrary optimization problems. Architectures based on agent intelligence and multi-agent concepts, such as learning and cooperation, provide several benefits for hybridizing metaheuristics; nevertheless, there is a lack of studies on architectures that fully explore these concepts for multi-objective hybridization. This thesis studies a multi-agent architecture named MO-MAHM, inspired by Particle Swarm Optimization concepts. In the MO-MAHM, particles are intelligent agents that learn from past experiences and move in the search space looking for high-quality solutions. The main contribution of this work is to study the MO-MAHM's potential to hybridize metaheuristics for solving combinatorial optimization problems with two or more objectives. We investigate the benefits of machine learning methods for supporting the agents' learning and propose a novel velocity operator for moving the agents in the search space. The proposed velocity operator uses a path-relinking technique and decomposes the objective space without requiring aggregation functions. Another contribution of this thesis is an extensive survey of existing multi-objective path-relinking techniques. Due to the lack of effective multi- and many-objective path-relinking techniques in the literature, we present a novel decomposition-based one, referred to as MOPR/D. Experiments comprise three differently structured combinatorial optimization problems with up to five objective functions: 0/1 multidimensional knapsack, quadratic assignment, and spanning tree. We compared the MO-MAHM with existing hybrid approaches, such as memetic algorithms and hyper-heuristics. Statistical tests show that the architecture presents competitive results regarding the quality of the approximation sets and solution diversity.
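    A bare-bones version of the path-relinking move that underlies such an operator is sketched below for binary-encoded solutions (e.g., 0/1 knapsack); the greedy single-objective walk and the toy profit function are simplifying assumptions, since the thesis's MOPR/D operates in a decomposed multi-objective setting:

    ```python
    # Path relinking: walk from an initiating solution toward a guiding one,
    # one attribute flip at a time, keeping the best solution visited.
    def path_relinking(start, guide, value):
        current, best = start[:], start[:]
        diff = [i for i in range(len(start)) if start[i] != guide[i]]
        while diff:
            # pick the flip toward the guide that yields the best next solution
            i = max(diff, key=lambda j: value(current[:j] + [guide[j]] + current[j+1:]))
            current[i] = guide[i]
            diff.remove(i)
            if value(current) > value(best):
                best = current[:]
        return best

    profits = [4, 2, 7, 1, 5]
    value = lambda sol: sum(p for p, bit in zip(profits, sol) if bit)
    print(path_relinking([1, 0, 0, 1, 0], [0, 1, 1, 0, 1], value))
    ```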

6
  • SUENE CAMPOS DUARTE
  • Reversal Fuzzy Switch Graph

  • Advisor : REGIVAN HUGO NUNES SANTIAGO
  • COMMITTEE MEMBERS :
  • REGIVAN HUGO NUNES SANTIAGO
  • BENJAMIN RENE CALLEJAS BEDREGAL
  • MANUEL ANTÓNIO GONÇALVES MARTINS
  • FLAULLES BOONE BERGAMASCHI
  • JORGE PETRUCIO VIANA
  • Data: Jun 17, 2022


  • Show Abstract
  • We present a state-based fuzzy model called Reversal Fuzzy Switch Graphs (RFSGs). This model enables the activation or deactivation of edges, as well as the updating of fuzzy values through the action of aggregation functions, whenever a transition between states occurs. The fuzzy nature of RFSGs allows the modeling of uncertainties, whereas the activation and deactivation of edges allows the simulation of dynamic aspects of access to system states. When more than one aggregation function is used in this process, we have Reversal Fuzzy Reactive Graphs (RFRGs).
    In addition, we propose some operations based on aggregation functions (unions, intersections, Cartesian product, and extension). We also present the relationship between RFRGs and the usual fuzzy graphs, together with notions of simulation and bisimulation. We further introduce the concept of homomorphism between RFSGs and a modal logic for verifying properties of systems modeled by such graphs.

7
  • CIRO MORAIS MEDEIROS
  • Improvements on Graph Path Queries: Expression, Evaluation, and Minimum-Weight Satisfiability 

  • Advisor : MARTIN ALEJANDRO MUSICANTE
  • COMMITTEE MEMBERS :
  • CARMEM SATIE HARA
  • CÉDRIC EICHLER
  • ELIZABETH FERREIRA GOUVEA GOLDBARG
  • FAROUK TOUMANI
  • MARTIN ALEJANDRO MUSICANTE
  • MATHIEU LIEDLOFF
  • MIRIAN HALDFELD FERRARI
  • NICOLAS TRAVERS
  • NORA REYES
  • Data: Aug 30, 2022


  • Show Abstract
  • We deal with three problems related to graphs and context-free languages: (1) we develop an alternative notation for expressing context-free languages; (2) we design, implement, and experimentally evaluate a context-free path query evaluation algorithm; and (3) we formalize the formal-language-constrained graph minimization problem, for which we design solutions for the cases where the formal language is regular or context-free.

8
  • RAFAEL DE MORAIS PINTO
  • A Framework for Multidimensional Assessment of Public Health Interventions


  • Advisor : LYRENE FERNANDES DA SILVA
  • COMMITTEE MEMBERS :
  • LYRENE FERNANDES DA SILVA
  • UIRA KULESZA
  • RICARDO ALEXSANDRO DE MEDEIROS VALENTIM
  • LYANE RAMALHO CORTEZ
  • XIMENA PAMELA CLAUDIA DÍAZ BERMÚDEZ
  • PLACIDO ANTONIO DE SOUZA NETO
  • THAISA GOIS FARIAS DE MOURA SANTOS LIMA
  • WAGNER DE JESUS MARTINS
  • Data: Sep 2, 2022


  • Show Abstract
  • Promoting awareness, increasing knowledge, and encouraging the adoption of healthy attitudes and behaviors are some of the objectives of public health interventions. However, to analyze the scope of an intervention, it is necessary to go beyond epidemiological data, since these data alone may not reveal the full magnitude of the results. It is necessary to consider other data sources, variables of interest, and dimensions that the intervention can reach. Thus, assessing the scope of a public health intervention from a multidimensional perspective, through a time series approach, can help guide the development of more effective interventions in the public health response. In this context, this thesis presents a framework for the multidimensional evaluation of public health interventions, exploring variables of interest that are possibly impacted by interventions. The framework is supported by software called Hermes, responsible for processing the data through a complete lifecycle and showing the results in a visual dashboard that allows decision makers to assess the effect over time, before and after campaigns, and to analyze possible correlations between variables of interest. To understand the current state of the art and guide research in this domain, we conducted a systematic literature review exploring the use of information technology approaches to analyze the impact of public health campaigns. This study summarizes variables of interest, campaign data, techniques, and tools used to evaluate public health interventions. We also conducted an analytical study to evaluate a health intervention launched in Brazil named "Sífilis Não!" ("Syphilis No!"). This study describes the analyzed data, extracted from seven data sources between 2015 and 2019 and grouped into four dimensions: campaign, communication, education, and epidemiological surveillance. Hermes processed and transformed the data using a time series approach, following the proposed multidimensional analysis framework. In addition, two other studies were conducted exploring data from the "Syphilis No!" project, using different approaches and variables of interest. The joint analysis of these data allowed a better understanding of the project's scope and of the impacted variables of interest. Finally, we also analyzed epidemiological and communication data on hepatitis in Brazil to carry out a case study using the proposed framework beyond syphilis. The results of this thesis contribute to a more comprehensive assessment of the scope of public health interventions, enabling policymakers to re-examine the awareness-raising strategies developed to alert people to health care and behavioral changes, as well as to direct the use of resources more effectively.

9
  • ALLAN VILAR DE CARVALHO
  • The Traveling Salesman Problem with Multiple Passengers Optional Bonus Quota and Time

  • Advisor : ELIZABETH FERREIRA GOUVEA GOLDBARG
  • COMMITTEE MEMBERS :
  • ELIZABETH FERREIRA GOUVEA GOLDBARG
  • SILVIA MARIA DINIZ MONTEIRO MAIA
  • ISLAME FELIPE DA COSTA FERNANDES
  • MARCO CESAR GOLDBARG
  • MATHEUS DA SILVA MENEZES
  • Data: Nov 23, 2022


  • Show Abstract
  • This work presents the Traveling Salesman Problem with Multiple Passengers Optional Bonus Quota and Time. The objective of this problem is to maximize the profit of a traveling salesman who, in addition to transporting goods, can transport passengers to share travel expenses. Goods and passengers must be transported from their origins to their destinations. The goods transported require loading and unloading time and must meet a minimum quota defined a priori. The salesman also decides whether or not to transport goods or a passenger when visiting a locality. This work describes the problem, relates it to other problems, and formalizes it mathematically. A nonlinear mathematical programming model, two heuristic algorithms, and thirteen metaheuristic algorithms are proposed. The metaheuristic algorithms follow the ACO, GRASP, and Transgenetic approaches. A linearization of the nonlinear mathematical programming model is also proposed, and two sets of test instances were created. A computational experiment that compares and validates the proposed models and algorithms is presented.

10
  • ARTHUR EMANOEL CASSIO DA SILVA E SOUZA
  • SAPPARCHI: A scalable platform to execute applications on Computational Smart City Environments

  • Advisor : NELIO ALESSANDRO AZEVEDO CACHO
  • COMMITTEE MEMBERS :
  • ALUÍZIO FERREIRA DA ROCHA NETO
  • CARLOS ANDRE GUIMARÃES FERRAZ
  • FLAVIA COIMBRA DELICATO
  • NELIO ALESSANDRO AZEVEDO CACHO
  • THAIS VASCONCELOS BATISTA
  • Data: Nov 28, 2022


  • Show Abstract
  • In the smart city environment, application development and execution face important challenges related to: 1) Big Data: the huge amount of processed and stored data, with various data sources and data types; 2) multiple domains: the many domains involved (economy, traffic, health, security, agronomy, etc.); and 3) multiple processing models, such as data flow, batch processing, services, and microservices. To face these challenges, many platforms, middlewares, and architectures have been proposed for running applications in the smart city environment. Despite all the progress already made, the vast majority of solutions do not meet the functional requirements of application development, application deployment, and application runtime. Some studies point out that, in a universe of 97 platforms, only 20.6% met these functional requirements, and when scalability (a non-functional requirement) is also considered, this number drops to 0.01%. Due to the lack of solutions addressing these requirements, the concerns of developing smart city applications are passed on to the various stakeholders. For example, while service providers are concerned with how to measure, charge, deploy, scale up or down, and execute applications so as to use the computing infrastructure efficiently, developers need to know how to implement, execute, and scale application components, where to store their data, and where to deploy (Cloud, Fog, or Edge). In this work, we seek to outline and answer some of these questions. To this end, we propose an evolutionary model for organizing and executing applications in the context of smart cities: the Smart City Application Architectural Model (Sapparchi). Sapparchi is an integrated architectural model for smart city applications that defines multiple processing levels (currently Edge, Fog, and Cloud). In addition, solutions are presented for monitoring, deploying, and scaling applications deployed at the Cloud, Fog, and Edge levels. Finally, we present the Sapparchi middleware platform for developing, deploying, and running applications in the smart city environment, with a focus on self-scaling and multiple computational processing levels, from Cloud to Edge.

11
  • THALES AGUIAR DE LIMA
  • An Investigation of Accent Inclusion in Brazilian Portuguese Speech

  • Advisor : MARJORY CRISTIANY DA COSTA ABREU
  • COMMITTEE MEMBERS :
  • MARJORY CRISTIANY DA COSTA ABREU
  • BRUNO MOTTA DE CARVALHO
  • SILVIA MARIA DINIZ MONTEIRO MAIA
  • ALTAIR OLIVO SANTIN
  • MARCOS ANTONIO SIMPLICIO JUNIOR
  • Data: Dec 16, 2022


  • Show Abstract
  • Speech is a very important part of how we communicate as a species, and with the rise of voice-based instant messaging and automated chatbots, its importance has become even greater. While most speech technologies have achieved high accuracy, they fail when tested on accents that deviate from the "standard" of a language. This is more concerning for languages that lack datasets and have scarce literature, like Brazilian Portuguese. In parallel, artificial intelligence (AI)-based tools are increasingly present and accepted in people's lives, even if not always noticeable. This exclusionary behaviour, combined with the advancement of AI in speech systems and the lack of resources, inspired the three objectives of this work. The first is to explore new ways of performing accent conversion for this language, adapting a lightweight model called SABr+Res to convert from Paulista to Nordestino. The second is to provide an acoustic analysis of Brazilian Portuguese accents covering a wide area of the national territory, finding and formalising possible differences between them. The third is to collect and release a speech dataset for Brazilian Portuguese: exploiting the availability of data and information on video platforms, the method automatically downloads videos of TEDx Talks, short presentations that are a source of reliable, clean audio with human- and automatically-generated transcriptions.

2021
Dissertations
1
  • RAVELLY OLIVEIRA DOS SANTOS SALES
  • Electric Traveling Salesman with Passengers

  • Advisor : MARCO CESAR GOLDBARG
  • COMMITTEE MEMBERS :
  • MARCO CESAR GOLDBARG
  • ELIZABETH FERREIRA GOUVEA GOLDBARG
  • MATHEUS DA SILVA MENEZES
  • Data: Jan 25, 2021


  • Show Abstract
  • The logistics research field has observed the growing use of electric vehicles in different branches, including passenger transport. The PCVEP is a logistics problem that mixes elements of the well-known Traveling Salesman Problem (PCV), the Traveling Salesman with Passengers Problem (PCVP), and the Traveling Time Electric Salesman Problem (PCVEJT), in addition to considering restrictions on increasing or recharging travel autonomy and the fact that autonomy is sensitive to the number of passengers in the vehicle, all intrinsic to the problem itself, which further hamper its solution. This work involved researching Electric Vehicle Routing Problems (PRVE) and problems that address ridesharing. First, a bibliographic survey of works addressing the issues mentioned above was carried out to properly formulate and describe the PCVEP. Because the problem is new, a set of Euclidean instances was created for it, partly random and partly adapted from TSPLIB. As solution methods for the PCVEP, a set of naive and hybridized heuristic algorithms was developed to anchor the experiments. A set of metaheuristic algorithms was also developed: a randomized greedy procedure improved through variable neighborhood descent, and a multi-colony ant algorithm whose ants take into account, in their solving process, characteristics specific to the PCVEP, such as charging stations, distances between locations, and passenger loading. The PCVEP raises important sustainability issues, promoting the reduction of greenhouse gas emissions into the atmosphere, reducing traffic in large cities, and encouraging socialization among people.

2
  • JOÃO GABRIEL QUARESMA DE ALMEIDA
  • Aqüeducte: A Service for Heterogeneous Data Integration in Smart Cities 

  • Advisor : THAIS VASCONCELOS BATISTA
  • COMMITTEE MEMBERS :
  • THAIS VASCONCELOS BATISTA
  • EVERTON RANIELLY DE SOUSA CAVALCANTE
  • FREDERICO ARAUJO DA SILVA LOPES
  • RENATA GALANTE
  • Data: Jan 28, 2021


  • Show Abstract
  • The evolution and development of new technological solutions for smart cities has grown significantly in recent years. The smart city scenario involves a large amount of data, arranged in a decentralized way, coming from various devices and applications. This context presents challenges related to data interoperability, including information sharing, data collection from multiple sources (web services, files, systems in general, etc.), and availability on development platforms focused on smart city applications. Considering these challenges, this work presents Aqüeducte, a service that provides collection, filtering, and conversion of data from various sources to the NGSI-LD data exchange protocol. In this way, it allows data to be imported into the Smart Geo Layers (SGeoL) middleware, which uses the same protocol. In addition, it provides management of files in various formats and supports relating data from different domains. All these functionalities are offered through a high-level web application that aims to make it easy for the end user to carry out the aforementioned processes. This work describes the architecture, implementation, and methodology used by Aqüeducte for: (i) extracting data from heterogeneous data sources; (ii) enriching them according to the NGSI-LD data format using the concept of linked data together with ontologies, via the LGeoSIM data model; and (iii) publishing them in middleware based on the NGSI-LD protocol, which in this work is SGeoL. The use of Aqüeducte is also described in real-world smart city scenarios.

3
  • ELISIO BRENO GARCIA CARDOSO
  • GENERATION OF FAULT-TOLERANT TOPOLOGIES WITH REAL-TIME PACKET DELIVERY CRITERIA

  • Advisor : MONICA MAGALHAES PEREIRA
  • COMMITTEE MEMBERS :
  • GUSTAVO GIRAO BARRETO DA SILVA
  • MARCIO EDUARDO KREUTZ
  • MONICA MAGALHAES PEREIRA
  • SILVIO ROBERTO FERNANDES DE ARAUJO
  • Data: Jan 28, 2021


  • Show Abstract
  • Advances in chip integration capacity have allowed the emergence of systems with several processing cores, with networks-on-chip becoming the main paradigm for communication between the elements of multi-processed systems. Several proposals have emerged to meet constraints on average latency, area, and energy consumption. Such projects also cover the network architecture, generating topologies that provide optimized performance for specific applications. This work proposes a heuristic for the generation of fault-tolerant topologies capable of delivering real-time and non-real-time packets via an alternative path within the network in the event of a link failure. Exploration always starts from a regular 2D-mesh topology and seeks fault-tolerant topologies able to deliver as many packets as possible on time. The implementation is based on the NOC42 simulator, extended to handle irregular topologies, real-time packets, and a routing algorithm based on a routing table.

4
  • WANDERSON MODESTO DA SILVA
  • Integration of Intelligent Electrical Devices with Legacy Approach in IoT-Based Smart Grid Systems

  • Advisor : AUGUSTO JOSE VENANCIO NETO
  • COMMITTEE MEMBERS :
  • AUGUSTO JOSE VENANCIO NETO
  • BENJAMIN RENE CALLEJAS BEDREGAL
  • EDUARDO COELHO CERQUEIRA
  • Data: Jan 29, 2021


  • Show Abstract
  • The evolution of the Smart Grid towards the Internet of Things (IoT) is a natural trend in the evolution of the electrical system and is of great importance for mission-critical infrastructures in all countries. Upgrading the Smart Grid system along IoT lines will potentially lay the groundwork for future benefits, enabling new opportunities in the Smart Grid market that add value through smart innovations, and ensuring better reliability and energy efficiency, which can reduce production and maintenance costs at the same time. The fundamental step behind this upgrade lies in adapting the smart grid infrastructure at the digital layer to make it compliant with the IoT paradigm. For this to be possible, it is necessary to use emerging technologies such as 5G, cloud computing, and edge computing. In this context, interoperability between legacy systems and the intelligent electronic devices (IEDs) used by the Smart Grid has become an important issue, since each manufacturer can adopt a different standard, energy distributors already have consolidated systems, and any change in this balance directly affects the system's infrastructure. Thus, the integration of legacy IEDs in an IoT-based Smart Grid environment was accomplished by using the components of the SG2IoT architecture presented in this work, which allows the inclusion of multiple IEDs in a scalable and flexible environment made possible by the SG-Cloud-IoT ecosystem. In addition, an application for monitoring IEDs using the FIWARE platform was used to validate the proposed solution regarding the display of critical information, the status of IEDs, and other information in real time. Finally, experiments were carried out to evaluate the performance of the solution when subjected to various levels of stress caused by increasing the number of devices using legacy Smart Grid protocols such as IEC 61850 and DNP3.

5
  • JULLIANA CAROLINE GONÇALVES DE ARAÚJO SILVA MARQUES
  • The impact of feature selection methods on online handwritten signature by using clustering-based analysis

  • Advisor : MARJORY CRISTIANY DA COSTA ABREU
  • COMMITTEE MEMBERS :
  • BRUNO MOTTA DE CARVALHO
  • MARJORY CRISTIANY DA COSTA ABREU
  • PLACIDO ANTONIO DE SOUZA NETO
  • Data: Jan 29, 2021


  • Show Abstract
  • The handwritten signature is one of the oldest and most accepted biometric authentication methods for establishing human identity in society. With the popularisation of computers and, consequently, of computational biometric authentication systems, the signature was chosen for being one of the biometric traits likely to be relatively unique to each person. However, when dealing with biometric data, including signature data, problems related to high-dimensional spaces can arise. Among other issues, irrelevant and redundant data and noise are the most significant, as they result in decreased identification accuracy. Thus, it is necessary to reduce the space by selecting the smallest set containing the most discriminative features, increasing the accuracy of the system. Our proposal in this work is therefore to analyse the impact of feature selection on identification accuracy based on the online handwritten signature. For this, we use two well-known online signature databases: SVC2004 and xLongSignDB. For the feature selection process, we applied two filter methods and one wrapper method. The resulting datasets were then evaluated with classification algorithms and validated with a clustering technique; in addition, we used a statistical test to corroborate our conclusions. The experiments presented satisfactory results when using a smaller number of more representative features: we reached an average accuracy of over 98% for both datasets, validated by clustering methods that achieved average accuracies of over 80% (SVC2004) and 70% (xLongSignDB).
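    A filter-style selection step of the kind applied here can be sketched with scikit-learn as below; the synthetic data stands in for the SVC2004 / xLongSignDB feature vectors, and the scoring function and k are assumptions of the example:

    ```python
    # Keep only the k most informative signature features before classification.
    import numpy as np
    from sklearn.feature_selection import SelectKBest, mutual_info_classif

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 40))      # 200 signatures, 40 raw features
    y = rng.integers(0, 2, size=200)    # toy writer labels (two writers)
    X[:, 5] += 2.0 * y                  # plant one truly discriminative feature

    selector = SelectKBest(mutual_info_classif, k=10)
    X_reduced = selector.fit_transform(X, y)
    print(X_reduced.shape)                          # (200, 10)
    print(selector.get_support(indices=True))       # indices of kept features
    ```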

6
  • LUIS TERTULINO DA CUNHA NETO
  • Transgenetic Algorithm for the Geometry and Intensity Problems in IMRT

  • Advisor : SILVIA MARIA DINIZ MONTEIRO MAIA
  • COMMITTEE MEMBERS :
  • SILVIA MARIA DINIZ MONTEIRO MAIA
  • ELIZABETH FERREIRA GOUVEA GOLDBARG
  • MARCO CESAR GOLDBARG
  • THATIANA CUNHA NAVARRO DE SOUZA
  • Data: Feb 8, 2021


  • Show Abstract
  • Intensity Modulated Radiotherapy (IMRT) is a form of treatment for cancerous diseases in which the patient is irradiated with radiation beams, aiming to eliminate tumor cells while sparing healthy organs and tissues as much as possible. Each beam is divided into beamlets that emit a particular dose of radiation. A treatment plan is composed of: (a) a set of beam directions (angles); (b) the amount of radiation emitted by the beamlets of each beam; and (c) a radiation delivery sequence. The elaboration of a plan can be modeled by optimization problems, usually NP-hard, where steps (a), (b) and (c) are called the Geometry, Intensity (or Fluence Map) and Realization problems, respectively. This work addresses the first two. A transgenetic algorithm is proposed for the joint solution of these two problems. It uses an adaptation of the epsilon-constraint method from the literature to compute the fluence map of a set of beams. In addition, linear and quadratic approximation functions are proposed for a particular type of (non-convex) function present in radiotherapy optimization: the dose-volume function. Two groups of experiments are carried out to ascertain the effectiveness of the algorithm: one with the dose in the tumor as a constraint, and another with it as an objective function. Real cases of liver cancer are used in the experiments. The results for the first group show effectiveness in the optimization of the objective functions, but doses below those desired for the tumor. The results for the second group show that the tumor dose as an objective function of the problem is in fact the most appropriate option.

7
  • LUCAS RODRIGUES SILVA
  • CONBAT FRAMEWORK: A SOLUTION FOR TESTING CONTEXT BASED SYSTEMS IMPLEMENTED IN ARDUINO

  • Advisor : ROBERTA DE SOUZA COELHO
  • COMMITTEE MEMBERS :
  • ROBERTA DE SOUZA COELHO
  • NELIO ALESSANDRO AZEVEDO CACHO
  • UIRA KULESZA
  • WILKERSON DE LUCENA ANDRADE
  • Data: Feb 26, 2021


  • Show Abstract
  • Embedded systems, especially context-aware systems, whose behaviour is determined by information constantly obtained from different kinds of sensors, can be very hard to test. That happens due to the nature of their input data, which can be hard to replicate, and also because of their limited resources. Thus, software testing techniques that work for "common" software can be insufficient for this kind of system. Tools created to support the testing of embedded systems are often limited to unit tests and avoid having to deal with data received from sensors, which is actually the foundation of context-aware systems. To support software testing of context-aware systems, this work proposes (i) an approach to collect and document the variation of context data captured by sensors over time, (ii) the concept of context-driven testing, and (iii) a tool called the Context Based Testing framework (ConBaT framework) to help collect context data and create context-driven tests for Arduino systems.
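
    The record-and-replay idea behind context-driven testing can be sketched as follows; the trace format and the behaviour under test are hypothetical, since ConBaT itself targets Arduino code.

      # Sketch: a recorded trace of sensor readings becomes reproducible test input.
      recorded_trace = [  # (timestamp in ms, temperature in Celsius), captured from a real sensor
          (0, 21.5), (1000, 22.0), (2000, 30.5), (3000, 31.0),
      ]

      def fan_should_turn_on(temperature):
          # Context-dependent behaviour under test (hypothetical rule).
          return temperature > 30.0

      def test_fan_reacts_to_recorded_context():
          expected = [False, False, True, True]
          actual = [fan_should_turn_on(t) for _, t in recorded_trace]
          assert actual == expected

      test_fan_reacts_to_recorded_context()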

8
  • SÁVIO RENNAN MENÊZES MELO

  • A Policy-Making Approach for Data Offloading in the Fog Computing Context

  • Advisor : GIBEON SOARES DE AQUINO JUNIOR
  • COMMITTEE MEMBERS :
  • FERNANDO ANTONIO MOTA TRINTA
  • GIBEON SOARES DE AQUINO JUNIOR
  • THAIS VASCONCELOS BATISTA
  • Data: May 24, 2021


  • Show Abstract
  • Currently, the most varied objects are connected to the Internet and, at the same time, generating massive amounts of data. Linked to this fact, Internet of Things applications are increasingly complex and carry more responsibilities. Storing, processing, managing, and analyzing this amount of data are challenging processes. These processes are commonly executed in external services through cloud computing; however, a paradigm called fog computing enables such execution directly at the edge of the network, serving as a support for the agile and efficient functioning of the Internet of Things. However, when fog computing does not have enough resources to perform these actions, the data is transferred to entities with higher computational capabilities, a practice known as offloading. In this regard, this research explores the use of policies that guide the process of data offloading in the context of fog computing. The objective of this work is to define and organize strategies to guide the development of policies for data offloading in fog computing. To this end, the concrete results of the work were: a survey of the policies for data offloading proposed in the literature; the development of a taxonomy that covers the main aspects involved in the data offloading process; the development of a guide that recommends practices for policy making for data offloading; and a demonstration of the instantiation of the approach through a proof of concept. Finally, this research finds that the use of such strategies has much to contribute to fog-based applications, as it improves the data offloading process.
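
    A data-offloading policy of the kind the taxonomy covers can be sketched as a simple decision rule; the thresholds and attributes below are hypothetical, chosen only to illustrate the criteria such a policy might weigh.

      # Sketch: should a fog node keep data locally or offload it to the cloud?
      def should_offload_to_cloud(cpu_load, free_storage_mb, data_size_mb, latency_sensitive):
          if latency_sensitive:
              return False              # keep latency-critical data at the edge
          if cpu_load > 0.85:
              return True               # fog node is saturated
          if data_size_mb > free_storage_mb:
              return True               # not enough local capacity
          return False

      print(should_offload_to_cloud(cpu_load=0.90, free_storage_mb=512,
                                    data_size_mb=128, latency_sensitive=False))  # True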

9
  • CARLOS DIEGO FRANCO DA ROCHA
  • WoundArch: A Hybrid Architecture System for the Segmentation and Classification of Chronic Wounds

  • Advisor : BRUNO MOTTA DE CARVALHO
  • COMMITTEE MEMBERS :
  • AURA CONCI
  • BRUNO MOTTA DE CARVALHO
  • BRUNO SANTANA DA SILVA
  • FERNANDO MARQUES FIGUEIRA FILHO
  • ITAMIR DE MORAIS BARROCA FILHO
  • Data: May 31, 2021


  • Show Abstract
  • Every year, millions of people around the world are affected by chronic wounds. The wound treatment process is costly and requires nursing professionals to carry out activities during patient care, among them the correct identification and classification of wound tissues. Accordingly, this work proposes to build a hybrid computational system with two configurations to support the treatment of wounds. The first configuration uses a mobile application to capture, segment, and classify images of wounds. The other configuration has a client-server architecture: the images are captured and segmented in the application and sent, through the Internet, to the web server, which is responsible for classifying the wound tissue. For this, some studies of the literature were carried out, namely: a review of scientific articles, of applications similar to the one proposed in this work, of methods for assessing the efficiency of computer systems, and of wound classification algorithms. This work uses the method of segmentation and classification of wound images developed by Marques et al. to build the hybrid computational system. An evaluation questionnaire was developed to be applied to nursing specialists at Hospital Universitário Onofre Lopes in order to assess the technical quality of the segmentation and classification of wounds. Thus, by means of a survey, we sought to answer the following research question: are the execution time and energy expenditure acceptable to nurses in both configurations of the system?

10
  • ARTHUR COSTA GORGÔNIO
  • A Data Stream Framework for Semi-supervised Classification in Non-Stationary Environments

  • Advisor : ANNE MAGALY DE PAULA CANUTO
  • COMMITTEE MEMBERS :
  • ANNE MAGALY DE PAULA CANUTO
  • MARJORY CRISTIANY DA COSTA ABREU
  • JOAO CARLOS XAVIER JUNIOR
  • KARLIANE MEDEIROS OVIDIO VALE
  • ARAKEN DE MEDEIROS SANTOS
  • Data: Jun 25, 2021


  • Show Abstract
  • Data stream applications receive a large volume of data quickly and need to process it sequentially. In these applications, the data may change during the use of the model; in addition, the number of instances whose label is known may not be sufficient to generate an effective model. Semi-supervised learning can be used to mitigate the difficulty posed by the small number of labelled instances, and an ensemble of classifiers can assist in detecting concept drift. Thus, in this work we propose a framework to perform semi-supervised classification tasks in a data stream context, using an approach based on an ensemble of classifiers. In order to evaluate the effectiveness of this proposal, empirical tests were carried out with eleven databases using two different batch sizes and nine supervised approaches (three simple classifiers and six ensembles), using the metrics accuracy, precision, recall and F-score. When assessing the number of instances processed, the supervised approaches achieved practically stable performance, while the proposal showed an improvement of 8.28% and 3.81% using 5% and 10% of labelled instances, respectively. In general, the results show that increasing the number of instances processed in batches implies, in most cases, improving the results of the semi-supervised approach.
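
    The batch-wise loop can be sketched as self-training with an ensemble: fit on the few labelled instances of a batch, pseudo-label the confidently predicted ones, and refit before the next batch. The data, batch size and confidence threshold below are hypothetical, not the framework's actual components.

      # Sketch: semi-supervised (self-training) classification over stream batches.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(0)
      ensemble = RandomForestClassifier(n_estimators=10, random_state=0)

      for batch in range(5):                        # the stream arrives in batches
          X = rng.random((100, 8))
          y = (X[:, 0] > 0.5).astype(int)
          labelled = rng.random(100) < 0.10         # only ~10% of labels are known
          labelled[np.argmin(X[:, 0])] = True       # ensure both classes appear
          labelled[np.argmax(X[:, 0])] = True       #   among the labelled instances
          ensemble.fit(X[labelled], y[labelled])
          proba = ensemble.predict_proba(X[~labelled])
          confident = proba.max(axis=1) > 0.8       # pseudo-label confident instances
          X_aug = np.vstack([X[labelled], X[~labelled][confident]])
          y_aug = np.concatenate([y[labelled], proba[confident].argmax(axis=1)])
          ensemble.fit(X_aug, y_aug)                # refit before the next batch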

11
  • BRUNO DOS SANTOS FERNANDES DA SILVA
  • An investigative analysis of Gender Bias in Judicial Data using Supervised and Unsupervised Machine Learning Techniques

  • Advisor : MARJORY CRISTIANY DA COSTA ABREU
  • COMMITTEE MEMBERS :
  • EVERTON RANIELLY DE SOUSA CAVALCANTE
  • LAURA EMMANUELLA ALVES DOS SANTOS SANTANA DE OLIVEIRA
  • MARJORY CRISTIANY DA COSTA ABREU
  • PLACIDO ANTONIO DE SOUZA NETO
  • Data: Jul 5, 2021


  • Show Abstract
  • Brazilian Courts have been working on the virtualisation of judicial processes since the turn of this century, leading to a revolution in relations, services and the labour force. A huge volume of data has been produced, and computational techniques have been an intimate ally in keeping business processes under control and delivering services as juridical clients expect. However, although there is a common misunderstanding that automation solutions are always 'intelligent', which in most cases is not true, there has never been much discussion about the use of intelligent solutions for this end, nor about the issues related to automatic prediction and decision making using historical data in this context. One of the problems that has already come to light is the bias in judicial datasets around the world. Thus, this work focuses on evaluating, applying and understanding resources based on fine and parameter tuning, with the aim of better using machine learning techniques when working on judicial systems and, therefore, raising the discussion of related secondary issues. We have used a real dataset of judicial sentences (Além da Pena), applying supervised and unsupervised learning models, and our results point to the accurate detection of gender bias.



12
  • JOSÉ GAMELEIRA DO RÊGO NETO
  • Understanding the Relationship between Continuous Integration and Test Coverage: An Empirical Study
  • Advisor : UIRA KULESZA
  • COMMITTEE MEMBERS :
  • ELDER JOSÉ REIOLI CIRILO
  • FREDERICO ARAUJO DA SILVA LOPES
  • NELIO ALESSANDRO AZEVEDO CACHO
  • UIRA KULESZA
  • Data: Aug 13, 2021


  • Show Abstract
  • The evolution of software development methodologies has enabled an increase in the delivery of new features and improvements. One of the best practices for increasing delivery speed is continuous integration (CI). CI is a practice that encourages automating and integrating source code more often during software development. The adoption of CI helps developers find integration issues faster, and it is believed that the practice of CI helps the software to have fewer bugs throughout its lifecycle. One of the ways to find bugs is by performing software tests, and one of the most used metrics to ensure quality in software testing is test coverage. Therefore, it is believed that CI adoption and test coverage have a strong relationship. Previous studies have provided preliminary evidence for this relationship between CI and tests; however, most of them do not demonstrate it empirically. This dissertation proposes an empirical study that aims to identify the relationship between CI adoption and test coverage through the analysis of several open-source projects. We quantify coverage trend comparisons over time between projects that do and do not adopt CI. Our results suggest that CI projects have high test coverage rates and stability, while NOCI projects have low coverage rates and less potential for growth.


    Keywords: continuous integration, test coverage, empirical study

     

    Investigating the Use of Static Analysis in the Context of Smart Cities Applications: An Exploratory Study

    The evolution of software and hardware systems has enabled the application of such technologies to assist in solving day-to-day problems in the context of big cities. Over the last years, there has been increasing interest from companies, researchers and government in the development of large-scale systems and applications for the domain of smart cities. Large-scale software systems often present critical challenges for their development, maintenance and evolution. Smart city applications typically involve dealing with many challenges, such as scalability, security, communication and heterogeneity. One way to identify problems in the source code of large-scale systems is through the usage of static analysis tools. In this context, this work presents an exploratory study that aims to evaluate the usefulness of modern static analysis tools in the context of smart city applications. The study analyzes 3 real smart city systems through the analysis of rule violations reported by the SonarQube tool. In addition, the work also relates such violations to existing challenges of the smart city domain reported in the literature. The results show that the challenges of security, data management and maintenance of the platform are the ones that exhibit more problems related to static analysis.

13
  • MATHEWS PHILLIPP SANTOS DE LIMA
  • Evaluation of MADM-based Mobility Strategies in a 5G networks quality-oriented scenario

  • Advisor : AUGUSTO JOSE VENANCIO NETO
  • COMMITTEE MEMBERS :
  • AUGUSTO JOSE VENANCIO NETO
  • BENJAMIN RENE CALLEJAS BEDREGAL
  • DANIEL CORUJO
  • VICENTE ANGELO DE SOUSA JUNIOR
  • Data: Oct 29, 2021


  • Show Abstract
  • The growth in the number of User Equipment (UE) devices creates the need to develop new mobile network technologies in order to meet the required traffic demand. In this context, the concept of Fifth Generation (5G) mobile networks emerges, with innumerable paradigms and approaches intended to advance beyond what current Internet service provides. However, mobility management in mobile networks still faces problems that need to be resolved, one of which is the efficiency of handover decision mechanisms. Numerous mobility decision algorithms have been documented in the literature in the attempt to obtain the best mobility decisions. Among them, solutions based on Multiple Attribute Decision Making (MADM) are considered the most robust, due to their efficiency with regard to multimedia traffic on mobile networks. Most evaluations of MADM-based methods documented in the literature do not analyse the subjective aspects related to user perception/satisfaction with regard to mobility models, which is a fundamental condition for assessing the Quality of Experience (QoE) of multimedia applications in this scenario. Therefore, this work offers a wide review of MADM methods applied to the quality-oriented mobility decision process.
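
    As an illustration of how an MADM method scores handover candidates, the sketch below applies Simple Additive Weighting (SAW), one classical MADM technique; the attribute values, normalization ranges and weights are hypothetical.

      # Sketch: SAW-based handover decision among candidate cells.
      candidates = {             # RSSI (dBm), delay (ms), bandwidth (Mbps)
          "cell_A": (-70, 30, 80),
          "cell_B": (-60, 50, 60),
          "cell_C": (-80, 20, 100),
      }
      weights = (0.4, 0.3, 0.3)  # relative importance of each attribute

      def saw_score(rssi, delay, bandwidth):
          # Normalize every attribute to [0, 1]; delay is a cost criterion.
          rssi_n = (rssi + 100) / 50          # assumes RSSI in [-100, -50] dBm
          delay_n = 1 - min(delay, 100) / 100
          bw_n = min(bandwidth, 100) / 100
          return weights[0] * rssi_n + weights[1] * delay_n + weights[2] * bw_n

      best = max(candidates, key=lambda cell: saw_score(*candidates[cell]))
      print(best)  # the cell with the highest weighted score wins the handover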

Thesis
1
  • KLEBER TAVARES FERNANDES
  • Creative Game: developing computational thinking, reading and writing skills by creating games
  • Advisor : EDUARDO HENRIQUE DA SILVA ARANHA
  • COMMITTEE MEMBERS :
  • EDUARDO HENRIQUE DA SILVA ARANHA
  • LYRENE FERNANDES DA SILVA
  • MARCIA JACYNTHA NUNES RODRIGUES LUCENA
  • FRANCISCO MILTON MENDES NETO
  • PATRICIA CABRAL DE AZEVEDO RESTELLI TEDESCO
  • THIAGO REIS DA SILVA
  • Data: Jan 22, 2021


  • Show Abstract
  • Initiatives that promote the development of computational thinking in basic education are still insufficient. Historically, the results of assessments in this segment have shown deficiencies in the learning of mathematics and the Portuguese language. There is research presenting technological solutions that prioritize solving math problems; however, when it comes to textual production (Portuguese language), few solutions are presented. One of the strategies that can contribute to the development of computational thinking and of the ability to produce texts is the use of digital games. These are increasingly part of our daily lives and are also considered teaching and learning tools. However, their production and documentation are very complex tasks that require programming skills and knowledge from various areas, which has hampered the development of games in the classroom. An unplugged approach to creating games based on natural language, in which the fundamentals of computing are learned in a playful way and without the use of computers, presents itself as an alternative for adopting game-based learning. In this context, this work presents an approach for the specification and creation of games in an unplugged way from texts produced by students, favoring the development of computational thinking, reading and writing skills in the classroom. In addition, it may favor students' interest in the area of computing by motivating them to enter a higher education course and/or a career in that area. The work uses the hypothetico-deductive method and is characterized as applied in nature. It is also classified as explanatory, since it proposes an approach to the specification and creation of digital games while examining its applicability, effectiveness and main benefits. The results from exploratory studies show that the proposed approach is applicable to its context and point to an improvement in the development of computational thinking skills, as well as motivating textual production, promoting students' reading and writing skills.

2
  • DÊNIS FREIRE LOPES NUNES
  • IPNoSys III: Software-defined Networks paradigm applied to the control process of a multiprocessor architecture.

  • Advisor : MARCIO EDUARDO KREUTZ
  • COMMITTEE MEMBERS :
  • MARCIO EDUARDO KREUTZ
  • MONICA MAGALHAES PEREIRA
  • GUSTAVO GIRAO BARRETO DA SILVA
  • ALISSON VASCONCELOS DE BRITO
  • CESAR ALBENES ZEFERINO
  • SILVIO ROBERTO FERNANDES DE ARAUJO
  • Data: Jan 26, 2021


  • Show Abstract
  • The use of Networks-on-Chip (NoCs) in the communication infrastructure of multiprocessor systems (MPSoCs) has become a standard due to their scalability and support for parallel communications. These architectures allow the execution of applications formed by different tasks that communicate with each other, and the support for this communication plays a fundamental role in the system's performance. IPNoSys (Integrated Processing NoC System) is an unconventional architecture, with its own execution model, developed to exploit the NoC communication structure as a high-performance processing system. In conventional computer networks, there has been a convergence towards the Software-Defined Networking (SDN) paradigm, in which a central component controls the network: it has an overview of the network and is programmable, changing the network configuration to adapt to the specifics of the application or the needs of the programmer. Some works propose the use of the SDN paradigm in NoCs in order to create more flexible architectures. An SDNoC thus has a simpler communication infrastructure but is connected to a programmable controller that manages the network's functioning. This work presents an architecture based on the IPNoSys execution model that uses SDN concepts to provide network control. IPNoSys III is a NoC with a 2D mesh topology that contains, on each node, a communication unit and four processing cores with memory access, and that executes packets in the IPNoSys format. An SDN controller, connected to all nodes, has an overview of the network and manages it, executing the routing algorithm and mapping tasks according to performance objectives. As a proof of concept, we developed a programming and simulation environment for this architecture in SystemC, and the evaluations performed show the operation and the benefits obtained through the use of an SDN controller.

3
  • ALBA SANDYRA BEZERRA LOPES
  • FRiDa: A Predictive tool for Fast DSE of Processors combined with Reconfigurable Accelerators 

  • Advisor : MONICA MAGALHAES PEREIRA
  • COMMITTEE MEMBERS :
  • ANNE MAGALY DE PAULA CANUTO
  • ANTONIO CARLOS SCHNEIDER BECK FILHO
  • MARCIO EDUARDO KREUTZ
  • MONICA MAGALHAES PEREIRA
  • SILVIO ROBERTO FERNANDES DE ARAUJO
  • Data: Feb 5, 2021


  • Show Abstract
  • Each year, the demand of embedded applications for computational resources increases. To meet this demand, embedded system designs have combined diversified components, resulting in heterogeneous platforms that aim to balance processing power and energy consumption. However, a key question in the design of these systems is which components to combine to meet the expected performance at the cost of additional area and energy. Performing a vast design space exploration allows estimating the cost of these platforms before the manufacturing phase. However, the number of candidate solutions to be evaluated grows exponentially with the increasing diversity of components that can be integrated into a heterogeneous embedded system. Evaluating the cost of one of these solutions through hardware synthesis is an extremely costly task, and even the use of high-level synthesis tools as an alternative does not make it possible to synthesize all the candidate solutions and still meet the time-to-market. In this work, we propose the use of prediction models based on machine learning algorithms to construct a tool for the design space exploration of heterogeneous systems composed of general-purpose processors and reconfigurable hardware accelerators. This tool aims to speed up design exploration in the early stages of the design process and achieve high accuracy in predicting the cost of solutions. Although there are solutions in the literature that use the same prediction-model approach, in general they address the exploration of microarchitectural parameters of only one of the components (either processors or accelerators). This work proposes varying the parameters of both components and also proposes the use of ensemble learning to increase the accuracy of the predictive modeling. Preliminary results show that the built prediction models are able to achieve a prediction accuracy of up to 98% and reduce the time for exploring the design space by 104x.
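
    The predictive-DSE idea can be sketched as training a regressor on a few synthesized design points and then estimating the cost of unseen configurations instead of synthesizing them; the parameters and the cost function below are hypothetical stand-ins for real synthesis results.

      # Sketch: an ensemble regressor replaces slow hardware synthesis during DSE.
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(1)
      # Design parameters: e.g. cache size, issue width, accelerator rows/columns.
      X_synth = rng.integers(1, 16, size=(60, 4)).astype(float)
      # Stand-in for the measured synthesis cost (latency) of those 60 design points.
      latency = 120.0 + X_synth @ np.array([0.5, -2.0, -1.0, -1.2]) + rng.normal(0, 0.5, 60)

      model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X_synth, latency)

      unseen = np.array([[8.0, 2.0, 4.0, 4.0]])  # a configuration never synthesized
      print(model.predict(unseen))               # estimated cost, no synthesis run needed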

4
  • HULIANE MEDEIROS DA SILVA
  • A Methodology for Defining the Number of Clusters and the Set of Initial Centers for Partitional Algorithms

  • Advisor : BENJAMIN RENE CALLEJAS BEDREGAL
  • COMMITTEE MEMBERS :
  • BENJAMIN RENE CALLEJAS BEDREGAL
  • ANNE MAGALY DE PAULA CANUTO
  • ARAKEN DE MEDEIROS SANTOS
  • GRAÇALIZ PEREIRA DIMURO
  • RONILDO PINHEIRO DE ARAUJO MOURA
  • Data: Feb 5, 2021


  • Show Abstract
  • Data clustering consists of grouping similar objects according to some characteristic. In the literature, there are several clustering algorithms, among which Fuzzy C-Means (FCM) stands out as one of the most discussed, being used in different applications. Although it is a simple and easy-to-manipulate clustering method, FCM requires the number of clusters as an initial parameter. Usually, this information is unknown beforehand, and this becomes a relevant problem in the data cluster analysis process. Moreover, the performance of the FCM algorithm strongly depends on the selection of the initial cluster centers. In general, the selection of the initial set of centers is random, which may compromise the performance of FCM and, consequently, of the cluster analysis process. In this context, this work proposes a new methodology to determine the number of clusters and the set of initial centers for partitional algorithms, using the FCM algorithm and some of its variants as a case study. The idea is to use a subset of the original data to define the number of clusters and to determine the set of initial centers through a method based on mean-type functions. With this new methodology, we intend to reduce the side effects of the cluster definition phase, possibly speeding up processing time and decreasing computational cost. To evaluate the proposed methodology, different cluster validation indices will be used to assess the quality of the clusters obtained by the FCM algorithm and some of its variants when applied to different databases.
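
    For reference, a compact Fuzzy C-Means loop appears below with the two choices the methodology targets made explicit: the number of clusters c and the initial centers (here, means over a random subsample stand in for the proposed mean-type functions). This is a generic sketch, not the methodology itself.

      # Sketch: FCM with explicit choices of c and of the initial centers.
      import numpy as np

      def fcm(X, c, m=2.0, iters=50):
          rng = np.random.default_rng(0)
          # Initial centers: mean-type summaries over a subset of the data.
          subset = X[rng.choice(len(X), size=min(5 * c, len(X)), replace=False)]
          centers = np.array([chunk.mean(axis=0) for chunk in np.array_split(subset, c)])
          for _ in range(iters):
              dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
              u = 1.0 / (dist ** (2 / (m - 1)))               # membership degrees
              u /= u.sum(axis=1, keepdims=True)
              um = u ** m
              centers = (um.T @ X) / um.sum(axis=0)[:, None]  # weighted center update
          return centers, u

      X = np.random.default_rng(1).random((150, 2))
      centers, u = fcm(X, c=3)
      print(centers)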

5
  • ALUÍZIO FERREIRA DA ROCHA NETO
  • Edge-distributed Stream Processing for Video Analytics in Smart City Applications

  • Advisor : THAIS VASCONCELOS BATISTA
  • COMMITTEE MEMBERS :
  • THAIS VASCONCELOS BATISTA
  • NELIO ALESSANDRO AZEVEDO CACHO
  • FLAVIA COIMBRA DELICATO
  • JOSÉ NEUMAN DE SOUZA
  • PAULO DE FIGUEIREDO PIRES
  • Data: Mar 31, 2021


  • Show Abstract
  • Emerging IoT applications based on distributed sensors and machine intelligence, especially in the context of smart cities, present many challenges for network and processing infrastructure. For example, a single system with a few dozen monitoring cameras is sufficient to saturate the city's backbone. Such a system generates massive data streams for event-based applications that require rapid processing for immediate actions. Finding a missing person using facial recognition technology is one of those applications that require immediate action at the location where that person is, since this location is perishable information. An encouraging plan to support the computational demand of widely geographically distributed systems is to integrate edge computing with machine intelligence to interpret massive data near the sensor and reduce end-to-end latency in event processing. However, due to the limited capacity and heterogeneity of edge devices, distributed processing is not trivial, especially when applications have different QoS requirements. This work presents an edge-distributed system framework that supports stream processing for video analytics. Our approach encompasses an architecture, methods, and algorithms to (i) divide the heavy processing of large-scale video streams into various machine learning tasks; (ii) implement these tasks as a data processing workflow on edge devices equipped with hardware accelerators for neural networks; and (iii) allocate a set of nodes with sufficient processing capacity to perform the workflow, minimizing the operational cost related to latency and energy and maximizing availability. We also propose to reuse nodes by performing tasks shared by various applications, such as facial recognition, thus optimizing the nodes' throughput. We also present simulations to show that the distribution of processing across multiple edge nodes reduces latency and energy consumption and further improves availability compared to processing in the cloud.
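
    The allocation step can be sketched as a greedy assignment of workflow tasks to the cheapest edge node with remaining capacity; task demands, node capacities and costs are hypothetical, and the dissertation's algorithms additionally weigh energy and availability.

      # Sketch: greedy placement of video-analytics tasks on edge nodes.
      tasks = [("detect_faces", 3), ("extract_embedding", 2), ("match_identity", 1)]  # (name, demand)
      nodes = {
          "edge_1": {"capacity": 4, "cost": 1.0},
          "edge_2": {"capacity": 3, "cost": 1.5},
          "cloud":  {"capacity": 100, "cost": 5.0},  # expensive, always-feasible fallback
      }

      placement = {}
      for name, demand in tasks:
          feasible = [n for n, s in nodes.items() if s["capacity"] >= demand]
          chosen = min(feasible, key=lambda n: nodes[n]["cost"])  # cheapest feasible node
          nodes[chosen]["capacity"] -= demand
          placement[name] = chosen

      print(placement)  # each task mapped to the node that will run it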

6
  • JULIANA DE ARAÚJO OLIVEIRA
  • Engineering Efficient Exception Handling for Android Applications

  • Advisor : NELIO ALESSANDRO AZEVEDO CACHO
  • COMMITTEE MEMBERS :
  • EIJI ADACHI MEDEIROS BARBOSA
  • FERNANDO JOSÉ CASTOR DE LIMA FILHO
  • NELIO ALESSANDRO AZEVEDO CACHO
  • ROBERTA DE SOUZA COELHO
  • WINDSON VIANA DE CARVALHO
  • Data: May 31, 2021


  • Show Abstract
  • The popularity of the Android platform can be attributed to its ability to run apps that leverage the many capabilities of mobile devices. Android applications are mostly written in Java; however, they are very different from standard Java applications, with different abstractions, multiple entry points, and a different form of communication between components. These differences in the structure of Android applications have had negative effects on the user experience in terms of low robustness. In terms of robustness, the exception handling mechanism of the Android platform has two main problems: (1) the "terminate all" approach and (2) the lack of a holistic view of exceptional behavior. Exception handling is strongly related to program robustness. In addition to robustness, energy consumption and performance are other non-functional requirements that need to be taken into account during development. These three requirements can directly affect the quality of the user experience and of the functioning of the applications. In this context, this work proposes a general methodology for the efficient engineering of Android applications and an exception handling mechanism called DroidEH to support the methodology and improve the robustness of Android applications. Studies have been carried out to understand the impact of exception handling on the robustness and energy consumption of Android applications. The evaluation of the methodology showed that it satisfactorily allows the developer to make decisions taking these non-functional requirements into account and to determine, through the trade-off between them, different operation modes that can be implemented in the application using DroidEH. Furthermore, it was observed that the use of DroidEH in applications can enhance their robustness.

7
  • ERICA ESTEVES CUNHA DE MIRANDA
  • A Framework Based on Requirements Engineering to Support Regulatory and Legal Compliance in Computer Systems

  • Advisor : MARCIA JACYNTHA NUNES RODRIGUES LUCENA
  • COMMITTEE MEMBERS :
  • MARCIA JACYNTHA NUNES RODRIGUES LUCENA
  • EDUARDO HENRIQUE DA SILVA ARANHA
  • APUENA VIEIRA GOMES
  • JOSUÉ VITOR DE MEDEIROS JÚNIOR
  • FERNANDA MARIA RIBEIRO DE ALENCAR
  • MARILIA ARANHA FREIRE
  • Data: Jul 29, 2021


  • Show Abstract

  • The regulatory and legal universe permeates everything and everyone. Therefore, computer systems need to be, from their conception through their evolution and even their maintenance, in regulatory and legal compliance with the laws, rules, regulations, bylaws, statutes, standards, and other legal instruments (referred to in this research as regulatory or legal sources, RLS) that rule their domain and application context. The objective of this research was to offer Computing professionals (e.g., requirements analysts/engineers and project managers) ways to verify and maintain legal and regulatory compliance in their projects, in a setting where regulatory or legal sources no longer cover only individuals or legal entities but also digital people, and where such RLS may not be only national. Identifying, defining and prioritizing these RLS have become problems for these Computing professionals in different contexts, especially in agile ecosystems of computing systems development. Thus, the following methodological strategies were adopted: systematic literature review; face-to-face and remote interviews; questionnaires; case studies; action research; and organizational ethnography. As a result of this research, a framework was formalized and evaluated with representatives of the target user audience, aimed at assisting Computing professionals in the deployment, implementation and verification (audit) of regulatory or legal compliance in computer systems in agile ecosystems, while being easily adaptable to any other methodology. Thereby, in addition to creating facilities throughout the work cycle with regulatory or legal requirements, it enables computer systems to be in regulatory and legal compliance with the RLS.


8
  • CARINE AZEVEDO DANTAS
  • An Analysis of Integration of Dynamic Selection Techniques in the Construction of an Ensemble System
  • Advisor : ANNE MAGALY DE PAULA CANUTO
  • COMMITTEE MEMBERS :
  • ANNE MAGALY DE PAULA CANUTO
  • MARJORY CRISTIANY DA COSTA ABREU
  • DANIEL SABINO AMORIM DE ARAUJO
  • ARAKEN DE MEDEIROS SANTOS
  • DIEGO SILVEIRA COSTA NASCIMENTO
  • Data: Jul 30, 2021


  • Show Abstract
  • The use of dynamic selection techniques, for attributes or for members of an ensemble, has appeared in several works in the literature as a mechanism to increase the accuracy of classifier ensembles. Individually, each of these techniques has already shown its benefits. The objective of this work is to improve the efficiency of classifier ensembles through the use of dynamic selection techniques for defining the structure of these systems. With that, it becomes possible to explore the use of these two techniques in an integrated way in the classification of an instance, making each instance be classified using its own subset of attributes and classifiers. When the two dynamic processes are used in an integrated manner, the complete system is believed to have a long execution time. Aiming to overcome this disadvantage, a decision criterion is introduced so that the complete dynamic system is used only on certain instances. Thus, some instances are classified using the whole dynamic system, while the others are classified using only a single classifier. In other words, some instances may not require a highly complex classification system; for these, a single classifier is used, and the dynamic ensemble is only applied to instances considered difficult to classify. Initial results showed that the integration of these two dynamic techniques obtained promising results in terms of accuracy. Finally, these results were not significantly affected by the addition of the decision criterion, which generated a very significant reduction in the total processing time of the system.
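
    The decision criterion can be sketched as a confidence gate: a cheap classifier answers the easy instances, and only the remainder go through the full ensemble (a plain bagging ensemble stands in here for the dynamic-selection system). Data, models and the threshold are hypothetical.

      # Sketch: route only hard instances to the (expensive) dynamic ensemble.
      import numpy as np
      from sklearn.ensemble import BaggingClassifier
      from sklearn.naive_bayes import GaussianNB

      rng = np.random.default_rng(2)
      X = rng.random((300, 6))
      y = (X[:, 0] + X[:, 1] > 1).astype(int)
      X_train, y_train, X_test = X[:200], y[:200], X[200:]

      cheap = GaussianNB().fit(X_train, y_train)
      full_system = BaggingClassifier(n_estimators=25, random_state=0).fit(X_train, y_train)

      proba = cheap.predict_proba(X_test)
      easy = proba.max(axis=1) > 0.9              # confident: a single classifier suffices
      pred = np.empty(len(X_test), dtype=int)
      pred[easy] = proba[easy].argmax(axis=1)
      if (~easy).any():
          pred[~easy] = full_system.predict(X_test[~easy])  # hard cases get the full system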


9
  • HUDSON GEOVANE DE MEDEIROS
  • A multiobjective approach to the leaf sequencing problem on intensity modulated radiation therapy

  • Advisor : ELIZABETH FERREIRA GOUVEA GOLDBARG
  • COMMITTEE MEMBERS :
  • ELIZABETH FERREIRA GOUVEA GOLDBARG
  • MARCO CESAR GOLDBARG
  • ANNA GISELLE CAMARA DANTAS RIBEIRO RODRIGUES
  • MATHEUS DA SILVA MENEZES
  • THALITA MONTEIRO OBAL
  • Data: Aug 10, 2021


  • Show Abstract
  • Algorithms are an essential part of radiation therapy planning which, from an optimization point of view, can be divided into three subproblems. Defining the angles from which radiation will be shot and prescribing a fluence map for each angle are two of them. This work investigates the third problem, called the realization problem. It consists of defining a sequence of configurations for a device (called a multileaf collimator) which correctly delivers the prescribed doses to the patient. A common model for this problem is the decomposition of a matrix into a weighted sum of (0-1)-matrices, called segments, whose rows only have consecutive ones. Each segment represents a setup of the collimator. Other constraints can also be considered. The realization problem has three objectives. The first is to minimize the sum of the weights associated with the segments. The second is to minimize the number of segments. The third minimizes the movement of the leaves. This work investigates and presents algorithms for two variants of the problem: unconstrained and constrained. A new greedy and randomized algorithm, GRA, was developed first for the unconstrained variant and then extended to the constrained variant. Its results were compared to those of other algorithms from the literature, from both mono- and multiobjective points of view. On the unconstrained problem, experiments show that GRA outperforms the other algorithms on all measured indicators. On the constrained problem, GRA presented competitive results, especially on the second objective, for which it presented the best results.
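
    A one-row instance makes the decomposition model concrete; in LaTeX notation (an illustrative example, not taken from the experiments):

      \[
      \begin{pmatrix} 2 & 3 & 1 \end{pmatrix}
      = 2\,\begin{pmatrix} 1 & 1 & 0 \end{pmatrix}
      + 1\,\begin{pmatrix} 0 & 1 & 1 \end{pmatrix}
      \]

    Both segments satisfy the consecutive-ones property; the sum of the weights (first objective) is 3 and the number of segments (second objective) is 2.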

10
  • JORGE PEREIRA DA SILVA
  • SGEOL: A Platform for Developing Smart Cities Applications 


  • Advisor : THAIS VASCONCELOS BATISTA
  • COMMITTEE MEMBERS :
  • EVERTON RANIELLY DE SOUSA CAVALCANTE
  • FABIO KON
  • MARKUS ENDLER
  • NELIO ALESSANDRO AZEVEDO CACHO
  • THAIS VASCONCELOS BATISTA
  • Data: Nov 25, 2021


  • Show Abstract
  • In the last few decades, the number of people living in cities has grown exponentially. This scenario imposes several challenges on city management, since the services offered to the population (transportation, security, health, electricity supply, etc.) need to be scaled up quickly to support an increasing number of inhabitants. The realization of the concept of smart cities has emerged as a promising way to face the various challenges resulting from urban growth. Smart city environments are characterized by the presence of a myriad of applications that aim to facilitate city management, contributing to the provision of more efficient services and, consequently, improving the quality of life of citizens. However, developing such applications is not a trivial task: in many cases, developers need to meet several complex requirements, and, to allow the contextualization and correlation of information produced in the city, data need to be enriched with geographic information that represents the urban space. In this sense, smart city platforms play a fundamental role in achieving this environment, as they provide high-level services that can be easily reused by developers to leverage application development. From this perspective, this work presents Smart Geo Layers (SGEOL), a scalable platform for developing applications for smart cities. In addition to allowing the integration of urban data with geographic information, SGEOL offers facilities for: i) management of context data; ii) integration of heterogeneous data; iii) semantic support; iv) data analysis and visualization; and v) support for data security and privacy. This work also presents experiences of real use of SGEOL in different scenarios, as well as results of computational experiments that evaluate its performance and scalability.

11
  • MÁRIO ANDRADE VIEIRA DE MELO NETO
  • A Framework Proposal for Multi-Layer Fault Tolerance in IoT Systems

  • Advisor : GIBEON SOARES DE AQUINO JUNIOR
  • COMMITTEE MEMBERS :
  • EVERTON RANIELLY DE SOUSA CAVALCANTE
  • GIBEON SOARES DE AQUINO JUNIOR
  • NELIO ALESSANDRO AZEVEDO CACHO
  • ROSSANA MARIA DE CASTRO ANDRADE
  • VINICIUS CARDOSO GARCIA
  • Data: Dec 3, 2021


  • Show Abstract
  • Fault tolerance in IoT systems is challenging to achieve due to their complexity, dynamicity, and heterogeneity. IoT systems are typically designed and constructed in layers. Every layer has its own requirements and fault tolerance strategies. However, errors in one layer can propagate and cause effects in others. Thus, it is impractical to consider a centralized fault tolerance approach for an entire system. Consequently, it is vital to consider multiple layers in order to enable collaboration and information exchange when addressing fault tolerance. The purpose of this study is to propose a multi-layer fault tolerance approach, granting interconnection among IoT system layers and allowing information exchange and collaboration in order to attain the property of dependability. To this end, an event-driven framework called FaTEMa (Fault Tolerance Event Manager) is defined, which creates a dedicated fault-related communication channel in order to propagate events across the levels of the system. The implemented framework assists with error detection and continued service. Additionally, it offers extension points to support heterogeneous communication protocols and to evolve new capabilities. The empirical evaluation results show that introducing FaTEMa improved error detection and error resolution times, consequently improving system availability. In addition, the use of FaTEMa provided a reliability improvement and a reduction in the number of failures produced.

12
  • THADEU RIBEIRO BENÍCIO MILFONT
  • Ordered n-dimensional fuzzy graphs

  • Advisor : BENJAMIN RENE CALLEJAS BEDREGAL
  • COMMITTEE MEMBERS :
  • BENJAMIN RENE CALLEJAS BEDREGAL
  • REGIVAN HUGO NUNES SANTIAGO
  • IVAN MEZZOMO
  • MATHEUS DA SILVA MENEZES
  • RENATA HAX SANDER REISER
  • RUI EDUARDO BRASILEIRO PAIVA
  • Data: Dec 3, 2021


  • Show Abstract
  • A fuzzy graph is a fuzzy relation between the elements of a set; fuzzy graphs are ideal for modeling uncertain data about such sets. Fuzzy graphs appear frequently in the literature; among them, the fuzzy graph of Rosenfeld, based on Zadeh's fuzzy sets, stands out, along with its extensions, such as interval-valued fuzzy graphs, bipolar fuzzy graphs and m-polar fuzzy graphs. The applications of these concepts are vast: cluster analysis, pattern classification, database theory, social science, neural networks, decision analysis, among others. As with fuzzy graphs, studies on admissible orders and their extensions are frequent. Originally, admissible orders were introduced in the context of interval-valued fuzzy sets by H. Bustince et al., and since then they have been widely investigated. Recently, this notion has been studied for other types of fuzzy sets, such as interval-valued intuitionistic fuzzy sets, hesitant fuzzy sets, multidimensional fuzzy sets and n-dimensional fuzzy sets. In this context, this work proposes to extend the fuzzy graph of Rosenfeld to interval-valued n-dimensional fuzzy graphs, based on n-dimensional fuzzy sets, as well as to admissible interval-valued n-dimensional fuzzy graphs, which we equip with an admissibly ordered semi-vector space. We present some methods to generate admissible orders on n-dimensional fuzzy sets and the concept of n-dimensional aggregation functions with respect to an admissible order. We extend the concept of ordered semi-vector space over the semi-field of non-negative real numbers to an arbitrary weak semi-field. We define on a set of admissible interval-valued n-dimensional fuzzy graphs the structure of an ordered semi-vector space and, with this, we introduce on this set the concept of aggregation functions for admissible interval-valued n-dimensional fuzzy graphs. Several properties of these concepts were investigated, in addition to presenting some applications.
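
    For reference, Rosenfeld's definition can be stated compactly: a fuzzy graph on a vertex set \(V\) is a pair \((\sigma, \mu)\) with \(\sigma : V \to [0,1]\) (fuzzy vertex set) and \(\mu : V \times V \to [0,1]\) (fuzzy edge set) such that

      \[
      \mu(u, v) \le \min\{\sigma(u), \sigma(v)\} \quad \text{for all } u, v \in V,
      \]

    i.e., no edge is stronger than its endpoints. The extensions proposed here replace \([0,1]\) by n-dimensional interval-valued membership degrees compared via an admissible order.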

2020
Dissertations
1
  • FRANCIMARIA RAYANNE DOS SANTOS NASCIMENTO
  • An experimental investigation of letter identification and scribe predictability in medieval manuscripts

  • Advisor : MARJORY CRISTIANY DA COSTA ABREU
  • COMMITTEE MEMBERS :
  • BRUNO MOTTA DE CARVALHO
  • EVERTON RANIELLY DE SOUSA CAVALCANTE
  • MARJORY CRISTIANY DA COSTA ABREU
  • GEORGE DARMITON DA CUNHA CAVALCANTI
  • Data: Jan 16, 2020


  • Show Abstract
  • Though handwriting might seem archaic today in comparison with typed communication, it is a long-established human activity that has survived into the 21st century. Accordingly, research interest in handwritten documents, both historical and modern, is significant. The way we write has changed significantly over the past centuries. For example, the texts of the Middle Ages were often written and copied by anonymous scribes. The writing of each scribe, known as his or her 'scribal hand', is unique and can be differentiated using a variety of consciously and unconsciously produced features. Distinguishing between these different scribal hands is a central focus of the humanities research field known as 'palaeography'. This process may be supported and/or enhanced using digital techniques, and thus digital writer identification from historical handwritten documents has also flourished. The automation of the process of recognising individual characters within each scribal hand has also posed an interesting challenge. A number of issues make these digital processes difficult in relation to medieval handwritten documents, including the degradation of the paper and soiling of the manuscript page, which can hamper automatic processing. Thus, in this work, we propose an investigation from both perspectives, character recognition and writer identification, in medieval manuscripts. Our experiments show interesting results, with good accuracy rates.

2
  • STEFANO MOMO LOSS
  • Orthus: A Blockchain Platform for Smart Cities

  • Advisor : NELIO ALESSANDRO AZEVEDO CACHO
  • COMMITTEE MEMBERS :
  • WILSON DE SOUZA MELO JUNIOR
  • DANILO CURVELO DE SOUZA
  • FREDERICO ARAUJO DA SILVA LOPES
  • NELIO ALESSANDRO AZEVEDO CACHO
  • THAIS VASCONCELOS BATISTA
  • Data: Feb 5, 2020


  • Show Abstract
  • Currently, blockchain has been widely used to store decentralised and secure transactions involving cryptocurrency (for instance, in the Bitcoin and Ethereum solutions). On the other hand, smart city applications are concerned with how data and services can be safely stored and shared. In this regard, this research investigates the use of blockchain and pinpoints a set of essential requirements to meet the needs of blockchain in the context of smart cities. Based on that, a platform named Orthus was proposed to support the use of blockchain in smart city initiatives, with a focus on scalability. This master's thesis demonstrates a case study of how to use the proposed platform in the context of the Natal Smart City Initiative, in Brazil, to handle land registration. Moreover, it also compares this platform with other implementations that use blockchain in different domains. Finally, this research confirms that the use of blockchain technology has much to contribute to smart city solutions, since it enables the creation of solutions on distributed networks that are able to meet the demand of the entire population.

3
  • YURI KELVIN NASCIMENTO DA SILVA
  • Traveling Salesman Problem with Prize Collecting, Passengers and Penalties for Delays

     

  • Advisor : MARCO CESAR GOLDBARG
  • COMMITTEE MEMBERS :
  • MARCO CESAR GOLDBARG
  • ELIZABETH FERREIRA GOUVEA GOLDBARG
  • MATHEUS DA SILVA MENEZES
  • THATIANA CUNHA NAVARRO DE SOUZA
  • Data: Feb 7, 2020


  • Show Abstract
  • This work introduces a new Traveling Salesman Problem variant called the Traveling Salesman Problem with Prize Collecting, Passengers and Penalties for Delays. In this problem, the salesman encounters, along the graph, potential passengers who need to move between localities. Each boarded passenger contributes a share of the travel costs, which are divided among all the occupants of the vehicle on a given stretch. In addition, each vertex has an associated prize value that may or may not be collected by the salesman during his journey. Each prize has a collection time and an estimated minimum time within which it can be collected without a reduction in its value, characterizing the penalty. Thus, the goal is to find a route that maximizes the amount of collected prizes minus the travel costs divided with passengers and any penalties imposed on the prizes. As an instrument for the formalization and validation of the problem, a Mathematical Programming model is proposed and solved with a mathematical solver on test instances generated for the problem. A coupling analysis of the instances is reported through experiments with ad hoc heuristic methods and exact methods that consider particular cases of the model. Moreover, three evolutionary metaheuristics are proposed, aiming at efficiently obtaining quality solutions.
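
    Schematically, the objective combines the three ingredients above; with illustrative notation (a route \(R\), prize \(p_v\) reduced by a delay penalty \(\delta_v\) at vertex \(v\), and arc cost \(c_{ij}\) shared by the \(s_{ij}\) occupants of the vehicle on arc \((i,j)\)), it reads

      \[
      \max_{R} \;\; \sum_{v \in R} \bigl(p_v - \delta_v\bigr) \;-\; \sum_{(i,j) \in R} \frac{c_{ij}}{s_{ij}}.
      \]

    This shows the structure only; the full Mathematical Programming model carries the boarding, capacity and timing constraints.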

4
  • MARCOS ALEXANDRE DE MELO MEDEIROS
  • Improving Bug Localization by Mining Crash Reports: an Empirical Study

  • Advisor : UIRA KULESZA
  • COMMITTEE MEMBERS :
  • EIJI ADACHI MEDEIROS BARBOSA
  • NELIO ALESSANDRO AZEVEDO CACHO
  • RODRIGO BONIFACIO DE ALMEIDA
  • UIRA KULESZA
  • Data: Feb 19, 2020


  • Show Abstract
  • The information available in crash reports has been used to understand the root cause of bugs and improve the overall quality of systems. Nonetheless, crash reports often lead to a huge amount of information, making it necessary to apply techniques that consolidate crash report data into groups according to a set of well-defined criteria. In this dissertation, we contribute a customization of rules that automatically find and group correlated crash reports (according to their stack traces) in the context of large-scale web-based systems. We select and adapt approaches described in the literature on crash report grouping and on ranking files suspected of crashing the system. Next, we design and implement a software tool to identify and rank buggy files using stack traces from crash reports. We use our tool and approach to identify and rank buggy files, that is, files that are most likely to contribute to a crash and thus need a fix.

    We evaluate our approach by comparing two sets of classes and methods: the classes (methods) that developers changed to fix a bug and the suspected buggy classes (methods) present in the stack traces of the correlated crash reports. Our study provides new evidence of the potential of crash report groups to correctly indicate buggy classes and methods present in stack traces. For instance, we successfully identify a buggy class with recall varying from 61.4% to 77.3% and precision ranging from 41.4% to 55.5%, considering the top 1, top 3, top 5, and top 10 suspicious buggy files identified and ranked by our approach. The main implication is that developers can locate and fix the root cause of a crash report by considering a few classes or methods, instead of having to review thousands of assets.
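
    The grouping-and-ranking pipeline can be sketched in a few lines: correlate crash reports whose stack traces share their topmost frames, then rank the files that appear in the largest groups. The traces and the grouping key below are hypothetical; the rules customized in the dissertation are richer.

      # Sketch: group crash reports by top stack frames, then rank suspicious files.
      from collections import Counter, defaultdict

      crash_reports = [  # each report reduced to its stack trace (hypothetical)
          ["OrderService.java:42", "PaymentDao.java:17", "Main.java:5"],
          ["OrderService.java:42", "PaymentDao.java:17", "Main.java:9"],
          ["CartController.java:88", "Main.java:5"],
      ]

      groups = defaultdict(list)
      for trace in crash_reports:
          key = tuple(trace[:2])        # correlate by the two topmost frames
          groups[key].append(trace)

      suspicious = Counter()
      for key, members in groups.items():
          for frame in key:
              file_name = frame.split(":")[0]
              suspicious[file_name] += len(members)  # larger groups weigh more

      print(suspicious.most_common(3))  # ranked list of likely buggy files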

5
  • DOUGLAS ARTHUR DE ABREU ROLIM
  • Application Development and Data Visualization Dashboards for Smart City Platforms

  • Advisor : THAIS VASCONCELOS BATISTA
  • COMMITTEE MEMBERS :
  • EVERTON RANIELLY DE SOUSA CAVALCANTE
  • NELIO ALESSANDRO AZEVEDO CACHO
  • ROSSANA MARIA DE CASTRO ANDRADE
  • THAIS VASCONCELOS BATISTA
  • Data: Mar 4, 2020


  • Show Abstract
  • The massive use of interconnected devices, through unique addressing schemes, capable of interacting with each other and with their neighbors to achieve common goals characterizes the IoT paradigm. The application of this paradigm to public affairs management, as a way to solve current urban problems such as resource scarcity, pollution, health concerns, and congestion, gives rise to so-called smart cities. However, it is necessary to address several major challenges related to the need to integrate multiple devices that use different types of protocols and do not follow a common standard. To address this problem, middleware platforms have emerged as a promising solution to facilitate application development, providing interoperability to enable the integration of devices, people, systems and data, and a host of additional services needed in the context of smart cities. In particular, smart city platforms should consider the existence of geographic information about the urban space and other aspects related to the context in which they are embedded. However, most middleware platforms for this scenario: (i) do not have high-level interfaces that facilitate smart city application development; and (ii) do not provide an interface for organizing the display of data to users, given the large amount and variety of data that is processed and stored on smart city platforms. This work: (i) proposes an architecture for the smart city platform interface that considers georeferenced data; and (ii) implements this architecture in the context of the Smart Geo Layers (SGeoL) middleware, including specific dashboard interfaces for application developers and for users interested in applications built with the platform. SGeoL is a platform that combines georeferenced data, solves interoperability and heterogeneity problems, and is currently applied in the context of the city of Natal.

6
  • LUCAS CRISTIANO CALIXTO DANTAS
  • A Virtual Laboratory for Developing and Experimenting Internet of Things Applications

  • Advisor : THAIS VASCONCELOS BATISTA
  • COMMITTEE MEMBERS :
  • THAIS VASCONCELOS BATISTA
  • EVERTON RANIELLY DE SOUSA CAVALCANTE
  • FREDERICO ARAUJO DA SILVA LOPES
  • KIEV SANTOS DA GAMA
  • Data: Mar 20, 2020


  • Show Abstract
  • The development of Internet of Things (IoT) applications faces important issues such as the inherent device heterogeneity in terms of capabilities, computing power, network protocols, and energy requirements. To address this challenge, IoT middleware platforms have been proposed to abstract away the specificities of such devices, promoting interoperability among them and easing application development. One of these proposals is FIWARE, an open, generic platform developed in the European Community to leverage the development of Future Internet applications. Given a set of FIWARE components required for a specific application under development, their deployment and configuration can be done either manually or using a container-based approach. However, setting up an environment composed of the main FIWARE components is sometimes not a trivial process. This work proposes FIWARE-Lab@RNP, a Web virtual laboratory for prototyping and experimenting with applications based on the FIWARE platform. The main concern of FIWARE-Lab@RNP is enabling the use of FIWARE resources through the Internet in a transparent way, thus relieving users from the need to deploy and operate a FIWARE instance on their own development environment. The virtual laboratory provides functionalities for easily creating, configuring, and managing instances of FIWARE components, devices, context entities, and services, while attempting to minimize the learning curve of these tasks.
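
    As a taste of what the laboratory abstracts away, the sketch below creates a context entity in a FIWARE Orion Context Broker through its NGSIv2 REST API; the host, port and entity are illustrative, and a locally deployed broker is assumed.

      # Sketch: registering a context entity in an Orion Context Broker (NGSIv2).
      import requests

      entity = {
          "id": "urn:Room1",
          "type": "Room",
          "temperature": {"value": 23.5, "type": "Number"},
      }
      resp = requests.post("http://localhost:1026/v2/entities", json=entity)
      print(resp.status_code)  # 201 Created when the broker accepts the entity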

7
  • GUILHERME DUTRA DINIZ DE FREITAS
  • Investigating the Relationship between Continuous Integration and Code Quality Metrics: An Empirical Study

     

  • Advisor : UIRA KULESZA
  • COMMITTEE MEMBERS :
  • UIRA KULESZA
  • EDUARDO HENRIQUE DA SILVA ARANHA
  • DANIEL ALENCAR DA COSTA
  • RODRIGO BONIFACIO DE ALMEIDA
  • Data: Mar 26, 2020


  • Show Abstract
  • Software quality is an essential attribute for the success of every software project and a significant element of the competitiveness of the software industry. Meanwhile, continuous integration is known as a software development practice that can contribute to improving software quality. In this research, we conduct a series of studies that investigate the relationship between continuous integration and software quality code metrics that had not been explored before. For this purpose, we looked at whether continuous integration adoption and maturity are related to better code quality metrics. As a result, we found no statistical evidence that CI adoption and maturity are related to code quality metrics. We found that test coverage is the continuous integration core practice that most impacts object-oriented software metrics. On the other hand, integrating builds frequently is not related to any of the studied metrics. Additionally, we found that projects with faster builds tend to have better system structure between classes and packages, but they also have higher coupling. We also observed that projects with fast build fixes tend to have better hierarchy and class structuring. Regarding test coverage, projects with higher test coverage tend to have lower intrinsic operation complexity but worse operation structuring compared with projects with lower test coverage.

8
  • DANILO RODRIGO CAVALCANTE BANDEIRA
  • A Study About the Impact of Combining Handwriting and Keyboard Keystroke Dynamics on Gender and Emotional State Prediction

  • Advisor : ANNE MAGALY DE PAULA CANUTO
  • COMMITTEE MEMBERS :
  • ANNE MAGALY DE PAULA CANUTO
  • DIEGO SILVEIRA COSTA NASCIMENTO
  • MARJORY CRISTIANY DA COSTA ABREU
  • Data: Apr 3, 2020


  • Show Abstract
  • The use of soft biometrics as an auxiliary tool for user identification is already well known.
    It is not, however, the only possible use for biometric data, since such data can also be used to
    extract low-level information about a user that is not directly related to his or her identity. Gender,
    hand orientation, and emotional state are some examples, which can be called soft biometrics.
    It is very common to find work using physiological modalities for soft-biometric
    prediction, but behavioural data are often neglected. Two behavioural modalities
    that are not often found in the literature are keystroke dynamics and handwriting
    signature, each of which has been used alone to predict a user's gender, but not in any kind of
    combination scenario. In order to fill this gap, this study investigates whether the
    combination of these two biometric modalities can impact gender prediction
    accuracy, and how this combination should be done.
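
    For concreteness, a minimal sketch of feature-level fusion, one plausible combination strategy among those such a study could compare: keystroke and handwriting feature vectors are concatenated and fed to a single classifier. All names and data below are illustrative placeholders.

        # Sketch: feature-level fusion of two behavioural modalities.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(42)
        n = 200
        keystroke = rng.normal(size=(n, 10))   # e.g., hold/flight times
        handwriting = rng.normal(size=(n, 8))  # e.g., stroke speed, pressure
        gender = rng.integers(0, 2, size=n)    # 0/1 labels (synthetic)

        fused = np.hstack([keystroke, handwriting])  # concatenation = fusion
        clf = RandomForestClassifier(random_state=0)
        print(cross_val_score(clf, fused, gender, cv=5).mean())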

9
  • PEDRO VICTOR BORGES CALDAS DA SILVA
  • Leveraging the Development of FIWARE-based Internet of Things Applications with IoTVar

  • Advisor : THAIS VASCONCELOS BATISTA
  • COMMITTEE MEMBERS :
  • EVERTON RANIELLY DE SOUSA CAVALCANTE
  • ROSSANA MARIA DE CASTRO ANDRADE
  • THAIS VASCONCELOS BATISTA
  • Data: Jul 15, 2020


  • Show Abstract
  • The rising popularity of the Internet of Things (IoT) has led to a plethora of highly heterogeneous, geographically dispersed devices. In recent years, IoT platforms and middleware have been integrated into the IoT ecosystem to tackle such heterogeneity, promote interoperability, and make application development easier. IoTVar and FIWARE are examples of solutions that provide services to accomplish these goals. However, developing an application atop FIWARE requires a high level of knowledge of the platform, besides being a time-consuming, error-prone task. On the other hand, IoTVar provides a high abstraction level to manage interactions between IoT applications and underlying IoT platforms, thus enabling developers to easily discover devices and transparently update context data at low development cost in terms of lines of code. This work presents the integration between the IoTVar middleware and the FIWARE platform, providing application developers with the possibility of declaring FIWARE IoT variables at the client side through IoTVar, so that they can automatically use mapped sensors whose values are transparently updated with sensor observations. The integration between IoTVar and FIWARE was evaluated through a development effort assessment comparing the lines of code used to declare and manage IoT variables, as well as experiments to measure the overhead caused by IoTVar in terms of CPU, memory, and battery.

10
  • RENATO MESQUITA SOARES
  • Use of Goal-Oriented Product Backlog in Scrum Projects to Support Product Owner Decision-Making

  • Advisor : MARCIA JACYNTHA NUNES RODRIGUES LUCENA
  • COMMITTEE MEMBERS :
  • MARCIA JACYNTHA NUNES RODRIGUES LUCENA
  • FERNANDO MARQUES FIGUEIRA FILHO
  • ISABEL DILLMANN NUNES
  • FERNANDA MARIA RIBEIRO DE ALENCAR
  • Data: Jul 27, 2020


  • Show Abstract

  • In the Scrum framework, the Product Owner (PO) takes a central role within the development process, being responsible for the communication between the customer and the developers. In this intermediation, the PO manages the Product Backlog, which maintains a list of items to be developed, corresponding to the customer's needs. Academia has explored the challenges faced by the PO, mainly in planning activities, where decision making is seen as the PO's most important task. However, the lack of structured information that could support these choices often leads POs to make wrong decisions or to shirk this responsibility. In Goal-Oriented Requirements Engineering, requirements are described from the stakeholders' organizational goals and, according to the literature, their definition can bring several benefits in terms of information organization. Most Scrum projects use user stories to specify requirements and, although user stories contain the definition of a goal, that goal is not made evident in the development process. This work therefore aims to present the organizational information inherent to the desired product or service in a form that justifies and guides the decision making of the PO. To this end, an artifact, the Goals Driven Product Backlog, is proposed, which seeks to highlight goals and their relationships with user stories. The evaluative study carried out found evidence that the artifact provides more structured information to the PO and, consequently, contributes to his or her decision making.

11
  • JOSÉ LUCAS SANTOS RIBEIRO
  • Microservice-Based Architecture for Data Classification

  • Advisor : NELIO ALESSANDRO AZEVEDO CACHO
  • COMMITTEE MEMBERS :
  • CARLOS EDUARDO DA SILVA
  • EVERTON RANIELLY DE SOUSA CAVALCANTE
  • FREDERICO ARAUJO DA SILVA LOPES
  • NELIO ALESSANDRO AZEVEDO CACHO
  • Data: Jul 29, 2020


  • Show Abstract
  • Smart solutions for data classification that make use of Deep Learning are on the rise. The data analysis area is attracting more and more developers and researchers, but the solutions developed need to be modularized into well-defined components in order to parallelize some stages and obtain good performance at execution time. Motivated by this, this work presents a generic architecture for data classification, named Machine Learning in Microservices Architecture (MLMA), that can be reproduced in a production environment. In addition, the use of the architecture is demonstrated in a project that performs multi-label classification of images to recommend tourist attractions and validates the use of serverless computing to serve Machine Learning models.
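
    To make the architectural idea concrete, the sketch below shows one stateless classification microservice of the kind such an architecture composes; the framework choice (Flask), model file, and endpoint are illustrative assumptions, not the dissertation's actual code.

        # Sketch: a stateless HTTP microservice wrapping a trained model.
        from flask import Flask, jsonify, request
        import joblib

        app = Flask(__name__)
        model = joblib.load("classifier.joblib")  # trained elsewhere in the pipeline

        @app.route("/classify", methods=["POST"])
        def classify():
            features = request.get_json()["features"]  # numeric feature vector
            label = model.predict([features])[0]
            return jsonify({"label": str(label)})

        if __name__ == "__main__":
            app.run(port=5000)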

12
  • THALES AGUIAR DE LIMA
  • Investigating fuzzy methods for multilingual speaker identification

  • Advisor : MARJORY CRISTIANY DA COSTA ABREU
  • COMMITTEE MEMBERS :
  • ALTAIR OLIVO SANTIN
  • MARJORY CRISTIANY DA COSTA ABREU
  • MONICA MAGALHAES PEREIRA
  • Data: Aug 27, 2020


  • Show Abstract
  • Speech is a crucial ability for humans to interact and communicate.
    Speech-based technologies are becoming more popular with speech interfaces,
    real-time translation, and low-cost healthcare diagnosis. Thus, this work aims
    to explore an important but under-investigated topic in the field: multilingual
    speaker recognition. We employed three languages: English, Brazilian
    Portuguese, and Mandarin. To the best of our knowledge, these three languages
    had not been compared yet. The objectives are to explore Brazilian Portuguese in
    comparison with the other two better-investigated languages, verifying
    speaker recognition robustness in multilingual environments, and to further
    investigate fuzzy methods. We performed an analysis of closed-set,
    text-independent speaker identification using log-energy, 13 MFCCs, deltas, and
    double deltas with four classifiers. The results indicated that this problem
    presents some robustness in multilingual environments, since adding a second
    language degrades the accuracy by 5.45%, and by 5.32% for a three-language
    dataset, using an SVM classifier.
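
    The described feature pipeline can be sketched as follows (file names and labels are placeholders; log-energy is omitted for brevity):

        # Sketch: 13 MFCCs + deltas + double deltas feeding an SVM for
        # closed-set, text-independent speaker identification.
        import librosa
        import numpy as np
        from sklearn.svm import SVC

        def features(wav_path):
            y, sr = librosa.load(wav_path, sr=16000)
            mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
            d1 = librosa.feature.delta(mfcc)
            d2 = librosa.feature.delta(mfcc, order=2)
            # average over frames: one simple utterance-level representation
            return np.hstack([m.mean(axis=1) for m in (mfcc, d1, d2)])

        X = np.array([features(p) for p in ["spk1_a.wav", "spk2_a.wav"]])
        y = np.array([0, 1])                    # speaker identities
        clf = SVC(kernel="rbf").fit(X, y)
        print(clf.predict([features("spk1_b.wav")]))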

13
  • FELLIPE MATHEUS COSTA BARBOSA
  • Accurate Chronic Wound Area Measurement using Structure from Motion

  • Advisor : BRUNO MOTTA DE CARVALHO
  • COMMITTEE MEMBERS :
  • BRUNO MOTTA DE CARVALHO
  • ANNE MAGALY DE PAULA CANUTO
  • LUIZ MARCOS GARCIA GONCALVES
  • RAFAEL BESERRA GOMES
  • VERONICA TEICHRIEB
  • Data: Oct 23, 2020


  • Show Abstract
  • Chronic wounds are ulcers that have a difficult or almost interrupted healing process,
    leading to an increased risk of health complications, such as amputations. The
    need for quantitative area measurements is of great importance in clinical trials,
    pathological analysis of wounds, and daily patient care. Manual 2D measurement methods
    cannot handle the problems caused by the curvatures of the human body and different
    camera angles. This work proposes a non-invasive methodology to perform 3D
    reconstruction of the human body surface to measure wound areas, which combines
    Structure from Motion (SfM) with different descriptors (SIFT, SURF, ORB, and BRIEF)
    and mesh reconstruction to obtain a reliable representation of the skin surface.
    The results show that accurate measurements of 3D surface areas can be obtained from
    images acquired with a smartphone using the proposed methodology, with average errors
    of 1.7% for SIFT, 3.6% for SURF, 6.4% for ORB, and 20.8% for BRIEF, using a
    configuration of 10 images, while the average error for 2D measurements was 32.7%,
    clearly pointing to the superiority of the 3D method.
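
    For context, SfM pipelines of this kind start from pairwise feature matching; a minimal sketch with the SIFT descriptor and Lowe's ratio test is shown below (image files are placeholders, and the dissertation's full reconstruction pipeline is not reproduced).

        # Sketch: SIFT keypoint matching between two views (OpenCV).
        import cv2

        img1 = cv2.imread("wound_view1.jpg", cv2.IMREAD_GRAYSCALE)
        img2 = cv2.imread("wound_view2.jpg", cv2.IMREAD_GRAYSCALE)

        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img1, None)
        kp2, des2 = sift.detectAndCompute(img2, None)

        matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]
        print(len(good), "correspondences available for pose estimation")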

14
  • LARYSSE SAVANNA IZIDIO DA SILVA
  • A Semantic Query Component for Smart Cities

  • Advisor : NELIO ALESSANDRO AZEVEDO CACHO
  • COMMITTEE MEMBERS :
  • FREDERICO ARAUJO DA SILVA LOPES
  • NELIO ALESSANDRO AZEVEDO CACHO
  • RENATA GALANTE
  • THAIS VASCONCELOS BATISTA
  • Data: Nov 5, 2020


  • Show Abstract
  • Smart cities are composed of several interconnected systems designed to promote better management of urban and natural resources, thus contributing to improving the quality of life of citizens. Data are of great importance for smart cities, as they significantly contribute to the strategic decision-making process for the urban space. However, such a scenario is typically characterized by highly heterogeneous data sources, making the search for significant information more complex. To deal with these characteristics, ontologies have been used in conjunction with Linked Data to semantically represent information, infer new information from existing data, and effectively integrate connected information from different sources. This scenario requires a data management strategy that includes efficient mechanisms to support information filtering and knowledge discovery. In this context, this work proposes a query component for semantic data based on the representation of georeferenced information in smart cities through ontologies and Linked Data. The proposed solution was applied to georeferenced educational data from a city to infer new, non-explicit information from existing data and relationships.
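
    As a hedged illustration of what such a query component evaluates underneath, the sketch below runs a SPARQL query over georeferenced linked data with rdflib; the vocabulary (ex:School, geo:lat/geo:long) and the dataset are placeholders, not the work's actual ontology.

        # Sketch: querying georeferenced educational linked data.
        from rdflib import Graph

        g = Graph()
        g.parse("schools.ttl", format="turtle")  # placeholder dataset

        q = """
        PREFIX geo: <http://www.w3.org/2003/01/geo/wgs84_pos#>
        PREFIX ex:  <http://example.org/education#>
        SELECT ?school ?lat ?long WHERE {
            ?school a ex:School ;
                    geo:lat ?lat ;
                    geo:long ?long .
        }
        """
        for row in g.query(q):
            print(row.school, row.lat, row.long)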

15
  • DOUGLAS BRAZ MACIEL
  • Management and Orchestration of Elastic Network Slices in NECOS LSDC-Defined Domains

  • Advisor : AUGUSTO JOSE VENANCIO NETO
  • COMMITTEE MEMBERS :
  • AUGUSTO JOSE VENANCIO NETO
  • CHRISTIAN RODOLFO ESTEVE ROTHENBERG
  • MARCO VIEIRA
  • THAIS VASCONCELOS BATISTA
  • Data: Nov 30, 2020


  • Show Abstract
  • The Novel Enablers for Cloud Slicing (NECOS) project is fostered by the 4th Horizon 2020 Collaborative Call between Brazil and Europe (EUB-01-2017: Cloud Computing). The NECOS project addresses existing cloud computing and networking limitations to support the heterogeneous demands of new services and verticals. The solution proposed by NECOS is based on a new concept called Lightweight Slice Defined Cloud (LSDC), which considers lightweight tools capable of Management and Orchestration (MANO) of resources that are combined and aggregated to provide end-to-end cloud- and network-level slices (called cloud-network slicing). This Master's research proposes a set of building blocks that extends the NECOS architecture with MANO of the network-slice parts composing active, or to-be-activated, cloud-network slice instances in domains defined by the NECOS platform, so as to provision end-to-end, resource-guaranteed connectivity with a high level of isolation by exploring the Network Function Virtualization (NFV) concept. In addition, the proposed building blocks follow the network softwarization paradigm to enable automatic resource control at runtime, ensuring Quality of Service (QoS) and network-slice resiliency. The proposed solution is evaluated in a lab-premised testbed at the Future Internet Services and Applications Research Group (REGINA-Lab), in which the entire NECOS platform participates along with the building blocks proposed in this Master's dissertation. A preliminary evaluation was performed through experiments that consider a real environment defined by NECOS cloud-network slices, suggesting that this approach is a viable solution.

16
  • IASLAN DO NASCIMENTO PAULO DA SILVA
  • A Microservice Architecture for Processing Relevant Images in Digital Crime Evidences

  • Advisor : BRUNO MOTTA DE CARVALHO
  • COMMITTEE MEMBERS :
  • BRUNO MOTTA DE CARVALHO
  • DANIEL SABINO AMORIM DE ARAUJO
  • FRANCISCO DANTAS DE MEDEIROS NETO
  • GIBEON SOARES DE AQUINO JUNIOR
  • Data: Dec 21, 2020


  • Show Abstract
  • Digital forensics is a branch of computer science that uses computational techniques to analyze criminal evidence with greater speed and accuracy. In the context of the Brazilian justice system, during a criminal investigation, forensic specialists extract, decode, and analyze the evidence collected to allow the prosecutor to make legal demands for a prosecution. These experts have very little time for the analysis, and finding criminal evidence can take a long time. To solve this problem, this work proposes ARTEMIS (A micRoservice archiTecturE for iMages In crime evidenceS), an architecture for classifying the large amounts of image files present in evidence using open-source software. The image classification module contains pre-trained classifiers built considering the needs of forensic analysts from the MPRN (Public Ministry of Rio Grande do Norte). Models were built to identify specific types of objects, for example: firearms, ammunition, Brazilian ID cards, text documents, cell phone screen captures, and nudity. The results obtained show that the system achieved good precision in most cases. This is extremely important in the context of this research, where false positives should be avoided in order to save analysts' work time. In addition, the proposed architecture was able to accelerate the process of evidence analysis.

Thesis
1
  • ROBERCY ALVES DA SILVA
  • Automatic Recommendation of Classifier Ensemble Structures Using Meta-Learning

  • Advisor : ANNE MAGALY DE PAULA CANUTO
  • COMMITTEE MEMBERS :
  • ANNE MAGALY DE PAULA CANUTO
  • DANIEL SABINO AMORIM DE ARAUJO
  • DIEGO SILVEIRA COSTA NASCIMENTO
  • GEORGE DARMITON DA CUNHA CAVALCANTI
  • MARJORY CRISTIANY DA COSTA ABREU
  • Data: Feb 7, 2020


  • Show Abstract
  • Today we are constantly concerned with classifying things and people and with making decisions; when we encounter problems with a high degree of complexity, we tend to seek the opinions of others, usually people who have some knowledge of the field or even, whenever possible, are experts in the field of the problem in question, so that they can effectively assist us in our decision-making process. In analogy to classification structures, we have a committee of people and/or specialists (classifiers) that makes decisions and, based on these answers, a final decision is made (aggregator). Thus, we can say that a committee of classifiers is formed by a set of classifiers (specialists), organized in parallel, that receive input information (a pattern or instance) and make individual decisions. Based on these decisions, the aggregator chooses the final, single decision of the committee. An important issue in designing classifier committees is the definition of their structure, more specifically the number and type of classifiers and the aggregation method, so as to achieve the highest possible performance. Generally, an exhaustive testing and evaluation process is required to define this structure. To assist with this line of research, this work proposes two new approaches for automatically recommending classifier committee structures, using meta-learning to recommend three of these parameters: the classifier type, the number of classifiers, and the aggregator.
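
    The meta-learning idea can be sketched briefly: each past dataset becomes one meta-example mapping meta-features to the best known committee configuration, and a meta-learner recommends a configuration for unseen datasets. The meta-features and configuration labels below are simplified placeholders, not the thesis's actual meta-base.

        # Sketch: k-NN meta-learner recommending committee structures.
        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier

        # meta-features per dataset: [n_instances, n_features, class entropy]
        meta_X = np.array([[1000, 20, 0.9], [200, 5, 0.4], [5000, 50, 0.7]])
        # best known structure: classifier type, committee size, aggregator
        meta_y = ["dt-10-majority", "knn-5-majority", "mlp-15-weighted"]

        recommender = KNeighborsClassifier(n_neighbors=1).fit(meta_X, meta_y)
        print(recommender.predict([[800, 18, 0.8]]))  # recommended structure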

2
  • GUSTAVO SIZÍLIO NERY
  • Understanding the Relationship between Continuous Integration and its Effects on Software Quality Outcomes

  • Advisor : UIRA KULESZA
  • COMMITTEE MEMBERS :
  • DANIEL ALENCAR DA COSTA
  • EDUARDO HENRIQUE DA SILVA ARANHA
  • GUSTAVO HENRIQUE LIMA PINTO
  • RODRIGO BONIFACIO DE ALMEIDA
  • UIRA KULESZA
  • Data: Feb 27, 2020


  • Show Abstract
  • Continuous Integration (CI) is the practice of automating and improving the frequency of code integration (e.g., daily builds). CI is often considered one of the key elements supporting agile software teams. It helps to reduce the risks in software development by automatically building and testing a project codebase, which allows the team to fix broken builds immediately. The adoption of CI can help development teams to assess the quality of software systems. The potential benefits of adopting CI have brought the attention of researchers to study its advantages empirically. Previous research has studied the impact of adopting CI on diverse aspects of software development. Despite the valuable advancements, there are still many assumptions in the community that remain empirically unexplored.

    Our work empirically investigates software quality outcomes and their relationship with the adoption of CI. This thesis provides a systematic literature mapping that presents a broad view of how practitioners and researchers recognize the CI practice to affect software product-related aspects. Additionally, we examine some of these assumptions by performing two empirical studies that aim to answer the following open questions: (i) Does the adoption of CI share a relationship with the evolution of test code? (ii) Is adherence to CI best practices related to the degree of code quality? Finally, we present a pioneering study that goes beyond correlation tests to investigate the estimated causal effect of CI adoption and its impact on automated tests. To that end, we apply causal inference using directed acyclic graphs and probabilistic methods to determine the causal effect of CI on automated tests. Our results suggest that, despite the trade-offs of CI adoption, it is likely to be associated with improvements in software quality. Additionally, it exerts a considerable positive causal effect on the volume of automated tests.
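
    For reference, DAG-based causal estimates of this kind typically rest on the backdoor adjustment: for a sufficient adjustment set Z of confounders read off the graph (the thesis's exact adjustment set is not reproduced here), the effect of CI adoption X on test volume Y is estimated as

        P(Y \mid do(X = x)) = \sum_{z} P(Y \mid X = x, Z = z)\, P(Z = z).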

3
  • LETTIERY D' LAMARE PORTELA PROCOPIO
  • Autonomous Drone Routing: An Algorithmic Study

  • Advisor : MARCO CESAR GOLDBARG
  • COMMITTEE MEMBERS :
  • GILBERTO FARIAS DE SOUSA FILHO
  • ELIZABETH FERREIRA GOUVEA GOLDBARG
  • LUCÍDIO DOS ANJOS FORMIGA CABRAL
  • MARCO CESAR GOLDBARG
  • MATHEUS DA SILVA MENEZES
  • Data: Feb 28, 2020


  • Show Abstract
  • This work formulates the Assicron's version of the Close-Enough Vehicle Routing Problem, used for aerial reconnaissance route planning. We formulate the problem with a second-order cone programming model and apply heuristic optimization techniques, based on a geometric property of the problem, to solve it. We present the results of extensive computational experiments with instances adapted from the literature; the tests show that our method quickly produces high-quality solutions when compared to the solver.
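
    For context, the "close enough" requirement is what makes second-order cone models natural here: the vehicle need only reach some point x_i within radius r_i of target c_i, a constraint expressible in conic form (generic notation, not necessarily the dissertation's):

        \lVert x_i - c_i \rVert_2 \le r_i, \qquad x_i \in \mathbb{R}^d .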

4
  • EMMANUELLY MONTEIRO SILVA DE SOUSA LIMA
  • Gradual Complex Numbers, Local Order, and Local Aggregations

  • Advisor : REGIVAN HUGO NUNES SANTIAGO
  • COMMITTEE MEMBERS :
  • ANNE MAGALY DE PAULA CANUTO
  • BENJAMIN RENE CALLEJAS BEDREGAL
  • GRAÇALIZ PEREIRA DIMURO
  • REGIVAN HUGO NUNES SANTIAGO
  • RONEI MARCOS DE MORAES
  • Data: Apr 17, 2020


  • Show Abstract
  • Aggregations are functions that have the ability to combine multiple objects into a single object of the same nature. Minimum, maximum, weighted average, and arithmetic mean are examples of aggregations frequently used in everyday life, with several possible applications. However, when working with aggregations such as those mentioned above, the objects in question are always real numbers; there are almost no studies in the literature that address these aggregations when the objects are complex numbers. This is due to the fact that, to introduce some aggregations, the objects involved need to be equipped with a total order relation. Gradual Complex Numbers (NCG), proposed by the author, were recently applied to the performance evaluation of classification algorithms. The method required comparing gradual complex numbers; to achieve that, the notion of local order is proposed and, consequently, the concept of local aggregation is developed. Two applications of this approach are provided.
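
    For reference, the standard notion being generalized: an n-ary aggregation on [0,1] is a function that is monotone in each argument and satisfies the boundary conditions below; the weighted mean is a familiar instance.

        A\colon [0,1]^n \to [0,1], \qquad A(0,\dots,0) = 0, \qquad A(1,\dots,1) = 1,

        A(x_1,\dots,x_n) \le A(y_1,\dots,y_n) \ \text{whenever} \ x_i \le y_i \ \text{for all } i,

        \text{e.g.}\quad A(x_1,\dots,x_n) = \sum_{i=1}^{n} w_i x_i, \qquad w_i \ge 0, \ \ \sum_{i} w_i = 1 .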

5
  • LIDIANE OLIVEIRA DOS SANTOS
  • An Architectural Style Based on the ISO/IEC 30141 Standard for Internet of Things Systems

  • Advisor : THAIS VASCONCELOS BATISTA
  • COMMITTEE MEMBERS :
  • ELISA YUMI NAKAGAWA
  • EVERTON RANIELLY DE SOUSA CAVALCANTE
  • FLAVIO OQUENDO
  • JAIR CAVALCANTI LEITE
  • THAIS VASCONCELOS BATISTA
  • Data: May 20, 2020


  • Show Abstract
  • The Internet of Things (IoT) has been contributing to a new technological revolution with significant social impact. The basic idea of IoT is to enable connectivity, interaction, and integration of uniquely addressable intelligent objects that collaborate with each other to achieve common goals. Although IoT is a promising paradigm for the integration of communication devices and technologies, it is necessary to revisit traditional software development methods in light of the particularities required by IoT systems. Given the fundamental role of software architecture in the development of software-intensive systems, the challenges related to the development of IoT systems must be considered from the architectural level onwards. A software architecture allows stakeholders to reason about design decisions prior to implementation, define constraints, analyze quality attributes, and be better oriented in terms of system maintenance and evolution. In the context of software architecture, architectural styles have a key role since they specify the architectural elements commonly used by a particular class of systems, along with a set of constraints on how these elements are to be used. Therefore, an architectural style provides a starting point for coherent modeling of the software architecture, allowing the reuse of elements and of a set of previously defined and validated architectural decisions, thus facilitating the architecture modeling process. The literature has much information about IoT and architectural styles, but there is a gap in their integration. The advantages offered by architectural styles could benefit the architectural specification of IoT systems, but there is still no architectural style specific to this type of system in the literature. In this context, the ISO/IEC 30141 standard proposes a reference model and a reference architecture for IoT systems and represents an international consensus on software architecture for IoT. However, the standard does not define an architectural style. Aiming to fill this gap, the main goal of this work is to propose an architectural style that offers guidelines for modeling the software architecture of IoT systems in accordance with the ISO/IEC 30141 standard. The style is specified using the SysADL language, an Architectural Description Language (ADL) focused on the modeling of software-intensive systems. This work also presents evaluations of the proposed style, performed through: (i) an evaluation of the expressiveness of the style using the framework proposed by PATIG (2004); (ii) a usability evaluation of the style using the Cognitive Dimensions of Notation (CDN) framework (BLACKWELL; GREEN, 2003); and (iii) an experimental evaluation using two controlled experiments to assess the effects of using the style.

6
  • GUSTAVO DE ARAUJO SABRY
  • The Traveling Car Renter with Passengers

  • Advisor : MARCO CESAR GOLDBARG
  • COMMITTEE MEMBERS :
  • ELIZABETH FERREIRA GOUVEA GOLDBARG
  • MARCO CESAR GOLDBARG
  • MATHEUS DA SILVA MENEZES
  • PAULO HENRIQUE ASCONAVIETA DA SILVA
  • THATIANA CUNHA NAVARRO DE SOUZA
  • Data: Jun 12, 2020


  • Show Abstract
  • This work presents a new variant of the Traveling Car Renter Problem not yet described in the literature, denominated the Traveling Car Renter with Passengers. This problem provides a set of cities, a set of vehicles, and a set of passengers. The salesman's tour can be done using different vehicles, i.e., the problem encompasses the process of renting and delivering vehicles. In the proposed model, this variant of the Traveling Car Renter Problem is merged with ridesharing aspects: in the cities there may be passengers interested in traveling to a certain destination and willing to share costs with the salesman while they are aboard the vehicle. The objective of the problem is to determine, in a graph, the lowest-cost Hamiltonian cycle considering the vehicle exchanges and the boardings along the tour. The problem is made up of several interlinked decisions: the sequence of visited cities, the order of used cars, the cities where the cars must be rented and/or delivered, and the passengers' boarding scheme. A general analysis of the problem is described to justify its complexity. Two mixed integer programming formulations are proposed. These formulations are linearized using two different techniques, resulting in four linear models. These models are implemented in two solvers and validated on instances of the problem, which in turn are based on instances of the Traveling Car Renter Problem. In addition, two naive heuristics and a metaheuristic are proposed to solve the problem. Comparative computational experiments and performance tests are performed on a set of 54 instances. The results obtained are compared and the conclusions are reported.

7
  • HADLEY MAGNO DA COSTA SIQUEIRA
  • Proposal of a high-performance architecture for real-time systems

  • Advisor : MARCIO EDUARDO KREUTZ
  • COMMITTEE MEMBERS :
  • MARCIO EDUARDO KREUTZ
  • MONICA MAGALHAES PEREIRA
  • GUSTAVO GIRAO BARRETO DA SILVA
  • CESAR ALBENES ZEFERINO
  • IVAN SARAIVA SILVA
  • Data: Jul 31, 2020


  • Show Abstract
  • Precision-Timed Machines (PRET) are architectures intended for use in real-time and cyber-physical systems. The main feature of these architectures is that they provide predictability and repeatability for real-time tasks, thus facilitating the development, analysis, and testing of these systems. The state of the art, at the time of this writing, consists of processors based on the PRET concept. These processors explore thread-level parallelism by interleaving threads at a fine-grained level, i.e., at each clock cycle. This strategy provides good performance when there is thread-level parallelism, but induces low performance in its absence. In addition, switching threads at each clock cycle leads to high latency, which can make it impossible to perform tasks that require low latency. The present work contributes to the state of the art in two ways: first, by presenting a proposal for a coarse-grained reconfigurable array based on the PRET concept. The proposed array is coupled to a PRET processor, providing support for accelerating important parts of an application. The array was designed in such a way that, when coupled to the processor, it does not make the processor lose its original temporal properties. The second contribution of this thesis is the proposal and implementation of a multicore architecture in which each core is composed of a processor coupled to the proposed array. Thus, this work presents a high-performance architecture aimed at embedded real-time systems with a high demand for performance, such as avionics. Results show that the proposed architecture is capable of providing speedups of more than 10 times for some types of applications. In terms of area, synthesis results for FPGA show that each core occupies less than half the area of an out-of-order processor and has an area similar to other arrays used in low-power embedded systems.

8
  • JOÃO BATISTA DE SOUZA NETO
  • Mutation Testing for Big Data Programs

  • Advisor : MARTIN ALEJANDRO MUSICANTE
  • COMMITTEE MEMBERS :
  • ANAMARIA MARTINS MOREIRA
  • GENOVEVA VARGAS-SOLAR
  • GIBEON SOARES DE AQUINO JUNIOR
  • MARTIN ALEJANDRO MUSICANTE
  • PLACIDO ANTONIO DE SOUZA NETO
  • SILVIA REGINA VERGÍLIO
  • UMBERTO SOUZA DA COSTA
  • Data: Jul 31, 2020


  • Show Abstract
  • The growth in the volume of data generated, its continuous and large-scale production, and its heterogeneity have led to the development of the concept of Big Data. The collection, storage and, especially, processing of this large volume of data require significant computational resources and adapted execution environments. Different parallel and distributed processing systems are used for Big Data processing. Some systems adopt a control flow model, such as Hadoop, which applies the MapReduce model, while others adopt a data flow model, such as Apache Spark. The reliability of large-scale data processing programs becomes important due to the large amount of computational resources required for their execution; therefore, it is important to test these programs before running them in production on an expensive distributed computing infrastructure. The testing of Big Data processing programs has gained interest in recent years, but the area still has few works that address the functional testing of this type of program, and most of them only address the testing of MapReduce programs. This thesis aims to reduce this gap by proposing a mutation testing approach for programs that follow a data flow model. Mutation testing is a technique that relies on simulating faults by modifying a program to create faulty versions called mutants. The generation of mutants is carried out by mutation operators that are able to simulate specific faults in the program. Mutants are used in the test design and evaluation process in order to obtain a test set capable of identifying the faults simulated by the mutants. In order to apply the mutation testing process to Big Data processing programs, it is important to know the types of faults that can be found in this context so as to design mutation operators that can simulate them. Based on this, we conducted a study to characterize faults and problems that can appear in Spark programs. This study resulted in two taxonomies. The first taxonomy groups and characterizes non-functional problems that affect the execution performance of Spark programs. The second taxonomy focuses on functional faults that affect the behavior of Spark programs. Based on the functional fault taxonomy, we designed a set of mutation operators for programs that follow a data flow model. These operators simulate faults in the program through changes in its data flow and operations. The mutation operators were formalized with a model we propose to represent data processing programs based on data flow. To support the application of our mutation operators, we developed the tool TRANSMUT-Spark, which automates the main steps of the mutation testing process for Spark programs. We conducted experiments to evaluate the mutation operators and the tool in terms of cost and effectiveness. The results of these experiments show the feasibility of applying the mutation testing process to Spark programs and its contribution to developing more reliable programs.
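
    The core idea can be sketched with a toy PySpark mutant (illustrative only; TRANSMUT-Spark's actual operators and formal model are described in the thesis): a mutation operator replaces one aggregation function with another, and a test asserting the original's output "kills" the mutant when the outputs diverge.

        # Sketch: an operator-replacement mutant of a Spark aggregation.
        from pyspark.sql import SparkSession

        spark = SparkSession.builder.master("local[1]").appName("mut").getOrCreate()
        rdd = spark.sparkContext.parallelize([("a", 1), ("a", 2), ("b", 3)])

        original = rdd.reduceByKey(lambda x, y: x + y)  # sum per key
        mutant = rdd.reduceByKey(lambda x, y: x * y)    # mutated: product per key

        assert original.collectAsMap() == {"a": 3, "b": 3}  # test oracle
        print(mutant.collectAsMap())  # {'a': 2, 'b': 3}: same oracle kills it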

9
  • BRUNO DE CASTRO HONORATO SILVA
  • Traveling Salesman Problem with Quota, Multiple Passengers, Incomplete Transportation, and Bonus with Time Penalty

  • Advisor : MARCO CESAR GOLDBARG
  • COMMITTEE MEMBERS :
  • ELIZABETH FERREIRA GOUVEA GOLDBARG
  • LUCÍDIO DOS ANJOS FORMIGA CABRAL
  • MARCO CESAR GOLDBARG
  • MATHEUS DA SILVA MENEZES
  • PAULO HENRIQUE ASCONAVIETA DA SILVA
  • SILVIA MARIA DINIZ MONTEIRO MAIA
  • Data: Sep 18, 2020


  • Show Abstract
  • The Quota Travelling Salesman Problem with Passengers, Incomplete Ride, and Collection Time is a new version of the Quota Travelling Salesman Problem. In this problem, the salesman uses a flexible ridesharing system to minimize travel costs while visiting some vertices to satisfy a pre-established quota. We consider operational constraints regarding vehicle capacity, travel time, passenger limitations, and penalties for rides that do not meet passenger requirements. We present a mathematical formulation and exact and heuristic approaches to solve this problem.

10
  • SANDINO BARROS JARDIM
  • Proactive Autoscaling Towards Assertive Elasticity of Virtual Network Functions in Service Chains

  • Advisor : AUGUSTO JOSE VENANCIO NETO
  • COMMITTEE MEMBERS :
  • AUGUSTO JOSE VENANCIO NETO
  • ANNE MAGALY DE PAULA CANUTO
  • BENJAMIN RENE CALLEJAS BEDREGAL
  • HAROLD IVAN ANGULO BUSTOS
  • MARÍLIA PASCOAL CURADO
  • Data: Oct 23, 2020


  • Show Abstract
  • Network function virtualization is a technology that proposes to decouple network functions, traditionally allocated on specialized hardware, making them available as software elements executing on general-purpose servers. Such flexibility allows offering network services over cloud infrastructures and facilitates enforcing network policies based on the chaining of different functions through which a target traffic must pass. Variations in service demand require the resource-management attribute of elasticity to meet performance goals, adjusting the computational resources of the functions to suit both the newly projected demand and operating costs, so as to avoid provisioning beyond need. Traditionally, reactive threshold-based approaches provide the elasticity function, at the cost of exponentially increasing response times as resources run out. Recent work suggests proactive elasticity approaches harnessing machine learning methods that allow anticipating decisions and adapting resources to the projected demand as closely as possible. Such adequacy is crucial for the success of a proactive elasticity solution, with a view to enabling assertive scaling decisions that respond with agility and precision to variations in demand, as well as to balancing cost and performance objectives. This doctoral thesis presents ARENA, a proactive elasticity mechanism for autoscaling virtualized network functions, driven by machine-learning-based demand prediction, to maximize the assertiveness of horizontal and vertical scaling decisions.
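
    A hedged sketch of the proactive loop (ARENA's actual predictors are richer; the per-replica capacity below is an illustrative assumption): fit a regressor on lagged load observations, forecast the next interval, and derive a replica count from the forecast.

        # Sketch: forecast-driven horizontal scaling decision.
        import numpy as np
        from sklearn.linear_model import LinearRegression

        load = np.array([110, 130, 150, 175, 200, 230, 260])  # req/s history
        lags = [load[i - 3:i] for i in range(3, len(load))]
        target = load[3:]

        model = LinearRegression().fit(np.array(lags), target)
        forecast = model.predict([load[-3:]])[0]

        CAPACITY_PER_REPLICA = 100.0  # assumed req/s one instance sustains
        replicas = int(np.ceil(forecast / CAPACITY_PER_REPLICA))
        print(f"forecast={forecast:.0f} req/s -> scale to {replicas} replicas")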

11
  • LEANDRO DE ALMEIDA MELO
  • It goes beyond the challenge! Understanding motivations to participate and collaborate in game jams

  • Advisor : FERNANDO MARQUES FIGUEIRA FILHO
  • COMMITTEE MEMBERS :
  • FERNANDO MARQUES FIGUEIRA FILHO
  • EDUARDO HENRIQUE DA SILVA ARANHA
  • UIRA KULESZA
  • CLEIDSON RONALD BOTELHO DE SOUZA
  • KIEV SANTOS DA GAMA
  • Data: Oct 26, 2020


  • Show Abstract
  • Game jams have been attracting an increasingly diverse attendance, with thousands of professionals, students, and enthusiasts coming together to build game prototypes every year. However, little is known about what attracts people with such different demographic profiles to voluntarily participate in such events, and the same can be said about the way participants collaborate during the event. Objectives: This dissertation aims to investigate why people willingly take part in such events, i.e., what acts as their motivations and priorities for participating in game jams. In addition, this work also verifies how the motivations and demographic profiles of the participants relate to the way in which they seek and offer help throughout the event. Method: A multi-method study, using quantitative and qualitative analysis techniques, was conducted to understand these aspects. In this process, data were collected from more than 3,500 people, across more than 100 countries, who had participated in three editions of an annual, global-scale game jam. Results: Among the results, this dissertation presents an instrument and a conceptual model of motivation which resulted from the data analysis, the latter composed of five motivational dimensions. Based on these aspects, it was possible to investigate the relative influence of the participants' occupation profile on their motivations. It was possible to identify, for instance, that students and hobbyists are the most influenced by technical motivations, i.e., motivations related to the practice and acquisition of technical knowledge, while indie developers are more attracted by business connections than other groups. Personal motivation related to ideation is the main motivation of all groups, with no significant difference between the groups' averages for this specific motivation. Furthermore, it was found that professional and indie developers are the ones who provide help most often, while students constitute the group with the highest intensity of interaction with mentors. However, the frequency with which participants receive help from mentors decreases as experience in game development and the number of prior participations in game jams increase. Conclusion: Based on these results, a set of organizational implications is provided to assist organizers in holding more attractive events and in identifying practices that can make collaboration even more present and effective in such events. Finally, implications for design that can be derived from the results of this dissertation are also discussed.

12
  • CÍCERO ALVES DA SILVA
  • A Fog Computing-Based Software Architecture for Patient-Centered Management of Medical Records

  • Advisor : GIBEON SOARES DE AQUINO JUNIOR
  • COMMITTEE MEMBERS :
  • GIBEON SOARES DE AQUINO JUNIOR
  • AUGUSTO JOSE VENANCIO NETO
  • THAIS VASCONCELOS BATISTA
  • ANDRÉ GUSTAVO DUARTE DE ALMEIDA
  • FERNANDO ANTONIO MOTA TRINTA
  • Data: Nov 10, 2020


  • Show Abstract
  • The aging of the world's population and the growth in the number of people with chronic diseases have increased expenses with medical care. Thus, the use of technological solutions, including Internet of Things-based solutions, has been widely adopted in the medical field to improve the patients' health. In this context, approaches based on Cloud Computing have been used to store and process the information generated in these solutions. However, using the Cloud can create delays that are intolerable for medical applications. Thus, the Fog Computing paradigm emerged as an alternative to overcome this problem, bringing computation and storage closer to the data sources. However, managing medical data stored in the Fog is still a challenge. Moreover, characteristics of privacy, confidentiality, and interoperability need to be considered in approaches that aim to explore this problem. So, this work defines a Fog Computing-based software architecture designed to provide patient-centered management of medical records. This architecture uses Blockchain technology to provide the necessary privacy features. This thesis also describes a case study that analyzed the requirements of privacy, confidentiality, and interoperability in a Home Care scenario. Finally, the performance behavior related to access to data managed in the proposed architecture was analyzed in the mentioned scenario.

13
  • BARTIRA PARAGUACU FALCAO DANTAS ROCHA
  • A Semantic Data Model for Smart Cities

  • Advisor : THAIS VASCONCELOS BATISTA
  • COMMITTEE MEMBERS :
  • THAIS VASCONCELOS BATISTA
  • EVERTON RANIELLY DE SOUSA CAVALCANTE
  • MARCIA JACYNTHA NUNES RODRIGUES LUCENA
  • FREDERICO ARAUJO DA SILVA LOPES
  • BERNADETTE FARIAS LÓSCIO
  • ROSSANA MARIA DE CASTRO ANDRADE
  • Data: Nov 27, 2020


  • Show Abstract
  • Smart cities involve a myriad of interconnected systems designed to promote better management of urban and natural resources, thus contributing to improving citizens' quality of life. The heterogeneity of domains, systems, data, and relationships between them requires defining a data model that can express information in a flexible and extensible way and promote interoperability between systems and applications. In addition, smart city systems can benefit from georeferenced information to enable more effective actions in the real-world urban space. In order to address the challenges related to data heterogeneity, considering georeferenced territory information, this work presents LGeoSIM, a semantic information model for smart cities, as a means of promoting interoperability and enabling automated reasoning over information. LGeoSIM is based on Semantic Web technologies, especially ontologies, RDF, and Linked Data, which enable the definition of linked semantic information and queries over such information. Its specification was implemented, supported by the NGSI-LD specification, on the Smart Geo Layers Platform (Sgeol), a FIWARE-based middleware platform that aims to facilitate the integration of data provided by heterogeneous sources in a smart city environment, as well as to support application development.

14
  • VALDIGLEIS DA SILVA COSTA
  • Typical Hesitant Fuzzy Automata: Theory and applications

  • Advisor : BENJAMIN RENE CALLEJAS BEDREGAL
  • COMMITTEE MEMBERS :
  • BENJAMIN RENE CALLEJAS BEDREGAL
  • REGIVAN HUGO NUNES SANTIAGO
  • ANDERSON PAIVA CRUZ
  • HELIDA SALLES SANTOS
  • RENATA HAX SANDER REISER
  • Data: Dec 8, 2020


  • Show Abstract
  • As a method of trying to extrapolate Church's thesis using the ideas of fuzzy sets presented by Zadeh, fuzzy automata theory emerged in the late 1960s as an extension of finite automata theory, adding the possibility of computing with some level of uncertainty. Over the years, due to the maturation of the extensions of fuzzy sets, different generalizations of fuzzy automata started to emerge in the literature, such as interval-valued fuzzy automata, intuitionistic fuzzy automata, etc. Fuzzy automata, in addition to being a fundamental part of fuzzy computation theory, have also seen relative success in practical applications, mainly in the field of pattern recognition, through uncertainty modeling. This work presents a new generalization of fuzzy automata based on the definitions of typical hesitant fuzzy sets (which we call typical hesitant fuzzy automata), as well as the motivation for this generalization, bringing to the domain of computing the possibility of working with uncertainty and also with hesitation. This new generalization thus aims to enable new ways to face problems that were not easily modeled before using uncertainty alone. Besides, we show ways to apply this new type of automaton in the fields of digital image processing and data classification.
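
    For reference, the underlying notion (standard in the literature, notation generic): a typical hesitant fuzzy set assigns to each element a finite, nonempty set of membership degrees, so hesitation between candidate degrees is represented explicitly:

        H = \{\langle x, h_H(x)\rangle : x \in X\}, \qquad h_H(x) \subseteq [0,1] \ \text{finite and nonempty} .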

2019
Dissertations
1
  • LEANDRO DIAS BESERRA
  • CrashAwareDev: Supporting Software Development based on Crash Report Mining and Analysis 

  • Advisor : ROBERTA DE SOUZA COELHO
  • COMMITTEE MEMBERS :
  • EDUARDO HENRIQUE DA SILVA ARANHA
  • ROBERTA DE SOUZA COELHO
  • RODRIGO BONIFACIO DE ALMEIDA
  • UIRA KULESZA
  • Data: Jan 31, 2019


  • Show Abstract
  • Currently, several organizations devote much of their software development time to bug fixing. This is one of the reasons why crash report tools are becoming more and more popular, since they centralize the information received due to failures. However, we can think of another use for such tools: the information stored over time could be used to prevent developers from making mistakes (bugs) similar to those they made in the past. In this study, we propose a way to transform this information into a set of tips presented to developers while they are coding. We propose that such tips be delivered in the Eclipse development environment, in the form of an Eclipse plug-in that alerts developers about code fragments with the potential to cause failures. In addition, the tool is able to mine the stack traces of occurring failures, identify system methods with a high recurrence of failures, and provide this information in the Eclipse IDE. We plan to conduct a case study in a real development context to evaluate the proposed tool.
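
    The mining step can be sketched as follows (the trace format below is a simplified Java-style placeholder, not the plug-in's actual parser): count how often each method appears at the top of the collected stack traces to flag failure-prone methods.

        # Sketch: ranking methods by recurrence at the top of stack traces.
        import re
        from collections import Counter

        traces = [
            "java.lang.NullPointerException\n\tat com.app.Cart.total(Cart.java:42)",
            "java.lang.NullPointerException\n\tat com.app.Cart.total(Cart.java:42)",
            "java.io.IOException\n\tat com.app.Report.save(Report.java:10)",
        ]

        top_frame = re.compile(r"at\s+([\w.$]+)\(")
        counts = Counter(m.group(1) for t in traces if (m := top_frame.search(t)))
        print(counts.most_common(1))  # most failure-prone method first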

2
  • VALMIRO RIBEIRO DA SILVA
  • An investigation of biometric-based user predictability in the online game League of Legends

  • Advisor : MARJORY CRISTIANY DA COSTA ABREU
  • COMMITTEE MEMBERS :
  • ANNE MAGALY DE PAULA CANUTO
  • MARJORY CRISTIANY DA COSTA ABREU
  • PLACIDO ANTONIO DE SOUZA NETO
  • Data: Feb 7, 2019


  • Show Abstract
  • Computer games have become consolidated as a favourite activity in recent years. Although such games were created to promote competition and self-improvement, there are some recurrent issues. One that has received the least attention so far is the problem of "account sharing", which is when a player shares his or her account with more experienced players in order to progress in the game. The companies running those games tend to punish this behaviour, but this specific case is hard to identify. Since the popularity of machine learning has never been higher, the aim of this study is to better understand how biometric data from online games behave, to understand how the choice of character impacts a player, and to examine how different algorithms perform when we vary how frequently a sample is collected. The experiments showed, through the use of statistical tests, how consistent a player can be even when changing characters or roles, the impact of more training samples, how the tested machine learning algorithms are affected by how often samples are collected, and how dimensionality reduction techniques such as Principal Component Analysis affect the data, all providing more information about how this state-of-the-art game database behaves.

3
  • LUANA TALITA MATEUS DE SOUZA
  • Documentation of requirements and sharing of knowledge: A proposal from an ethnographic study

  • Advisor : MARCIA JACYNTHA NUNES RODRIGUES LUCENA
  • COMMITTEE MEMBERS :
  • MARCIA JACYNTHA NUNES RODRIGUES LUCENA
  • LYRENE FERNANDES DA SILVA
  • APUENA VIEIRA GOMES
  • RICARDO ARGENTON RAMOS
  • Data: Apr 26, 2019


  • Show Abstract
  • This work presents an ethnographic study on the routine of two teams of requirements analysts in a software factory. The objective is to identify the challenges in producing and maintaining documentation for its various target audiences and to propose practices that improve the effectiveness of the sharing and use of the information collected and documented. An adaptation of an ethnographic process was developed, composed of the phases of team observation, interviews (with requirements analysts, team leaders, and systems management), and material analysis. At the end of this process, the results collected are interpreted in a stage called triangulation, which structures and combines the observed events. The challenges identified were grouped into three broad categories: knowledge sharing, documentation, and agile methodologies. After surveying these challenges, two surveys were applied to the documentation audiences to understand their informational needs. Knowledge of the challenges and of the indicated practices allows for productivity gains, reduced communication costs among team members, reduced sprint planning costs, reduced in-person dependency on requirements analysts, and effective documentation. The contributions of this work are the ethnographic process itself, devised specifically for this research, and the benefits of adopting the suggested practices.

4
  • FÁBIO FREIRE DA SILVA JÚNIOR
  • One for all and all for one: an exploratory study on the aspects that support collaboration in game jams

  • Advisor : FERNANDO MARQUES FIGUEIRA FILHO
  • COMMITTEE MEMBERS :
  • FERNANDO MARQUES FIGUEIRA FILHO
  • EDUARDO HENRIQUE DA SILVA ARANHA
  • UIRA KULESZA
  • NAZARENO ANDRADE
  • Data: Apr 29, 2019


  • Show Abstract
  • Game jams are events where people form teams to develop functional game prototypes collaboratively, under thematic and time constraints. Although such events have grown in popularity, there is still little evidence in the scientific literature on how collaboration occurs in this type of event and, especially, on the factors that facilitate such collaboration. This study aims to understand how facilitating aspects support collaboration in game development through jams. To this end, a study was conducted during the Global Game Jam 2018 and 2019. Our findings describe the main practices during the event and the strategies used by teams to facilitate collaboration. In addition, mentors and experienced jammers are recognized as the main coordinators of these tasks, acting as mediators between teams. This work also contributes to understanding and characterizing collaborative work in jams. With evidence collected and validated with experts, we suggest a catalog of practices that support interaction between teams in this type of event. These techniques may be applied in similar contexts to facilitate collaboration between teams.

5
  • YGOR ALCÂNTARA DE MEDEIROS
  • Prize Collecting Traveling Salesman Problem with Ridesharing

  • Advisor : MARCO CESAR GOLDBARG
  • COMMITTEE MEMBERS :
  • ELIZABETH FERREIRA GOUVEA GOLDBARG
  • MARCO CESAR GOLDBARG
  • SILVIA MARIA DINIZ MONTEIRO MAIA
  • THATIANA CUNHA NAVARRO DE SOUZA
  • Data: Jul 1, 2019


  • Show Abstract
  • The Prize-Collecting Traveling Salesman Problem with Ridesharing is a model that merges elements of the classic PCTSP with ridesharing. The costs of the driver's journey are reduced through the apportionment of expenses enabled by sharing the seats of the vehicle used in the prize-collecting task. The tasks on the route are selected according to the prize-collecting routing model, thus considering penalties for unattended tasks and additionally requiring the fulfillment of a minimum quota of tasks. The demand for collaborative transport is protected by constraints that ensure passengers are transported to their destinations; likewise, the apportioned costs must be less than or equal to the fare limits established by the passengers. The present work presents the mathematical formulation of the problem, validates the model in an exact solution process, and examines the performance of two algorithms that execute construction steps with exact criteria and six with heuristic criteria. The algorithms with exact steps aim to create anchor results against which the performance of the algorithms with heuristic decisions can be evaluated. Three instance groups are also proposed for the problem in order to allow future experimentation with new algorithms. Finally, it is concluded that the algorithms with heuristic steps achieve promising performance for the examined problem.

6
  • HIAGO MAYK GOMES DE ARAÚJO ROCHA
  • Mapping and Routing Problem: Hybrid Bioinspired Optimization Solution

  • Advisor : MONICA MAGALHAES PEREIRA
  • COMMITTEE MEMBERS :
  • ANTONIO CARLOS SCHNEIDER BECK FILHO
  • MARCIO EDUARDO KREUTZ
  • MONICA MAGALHAES PEREIRA
  • SILVIA MARIA DINIZ MONTEIRO MAIA
  • Data: Jul 19, 2019


  • Show Abstract
  • The advances in integrated circuit fabrication technology, enabled by the reduction of transistor size, make possible the creation of complex multiprocessor systems inside a single chip, named Multiprocessor Systems-on-Chip (MPSoCs). To allow communication among the different cores, one of the main communication models currently used is the Network-on-Chip (NoC), which is more scalable than traditional bus solutions. In spite of the potential performance improvement expected from task parallelism, achieving a real performance improvement requires efficient management of the available resources in the system, such as processing cores and communication links. This management is related, among other aspects, to how tasks are mapped onto the MPSoC cores and which NoC channels are used to provide communication routes for tasks. In the literature, these aspects are handled individually, even though they are highly correlated. Based on this principle, this work presents a mathematical formulation of the Mapping and Routing Problem (PMR) that combines both problems. Additionally, in order to find optimized mapping solutions, this work presents three proposals for static mapping and one for dynamic mapping. In the static mapping context, hybrid bioinspired optimization strategies (genetic, memetic, and transgenetic) are presented. These strategies follow a general approach to find mapping solutions and, internally, use an exact routing fitness evaluation based on the proposed model. In the dynamic mapping context, an algorithm is proposed that uses the transgenetic operator to allocate tasks on demand at run time. The algorithms were implemented and evaluated using a NoC simulation tool. In order to compare with the state of the art, three algorithms from the literature were also implemented and simulated. The results show that approaches able to capture the features of the architecture more deeply are more efficient. More specifically, the transgenetic algorithm presents the best results for latency and energy. Furthermore, it was possible to use the transgenetic aspects to propose a dynamic solution that can be used when the system does not know the application behavior in advance.

7
  • SIDEMAR FIDELES CEZARIO
  • Application of the OWA Operator in the Beam Angle Optimization and Intensity Problems in IMRT

  • Advisor : ELIZABETH FERREIRA GOUVEA GOLDBARG
  • COMMITTEE MEMBERS :
  • THALITA MONTEIRO OBAL
  • ELIZABETH FERREIRA GOUVEA GOLDBARG
  • MARCO CESAR GOLDBARG
  • SILVIA MARIA DINIZ MONTEIRO MAIA
  • Data: Jul 23, 2019


  • Show Abstract
  • Radiation therapy is an extremely important method for cancer treatment. The main challenge is to deliver at least the prescribed dose to the tumor while avoiding exposing healthy organs to radiation beyond defined limits. Intensity-Modulated Radiation Therapy (IMRT) is an advanced mode of high-precision radiotherapy. Over the years, many researchers have presented algorithms to tackle the main difficulty of IMRT treatments, which consists in automating the selection of beam angles for an adequate dose distribution. This research presents an algorithm that seeks the ideal balance between a set of angles and a dose distribution that respects the medical prescriptions inherent to the treatment. The proposed algorithm uses two new mathematical models and the Ordered Weighted Averaging (OWA) operator as a preference criterion to choose the best solution.
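
    For reference, the OWA operator used as the preference criterion (standard definition, not specific to this dissertation): given nonnegative weights summing to one, the arguments are sorted in decreasing order before weighting,

        \mathrm{OWA}_w(x_1,\dots,x_n) = \sum_{i=1}^{n} w_i\, x_{\sigma(i)}, \qquad x_{\sigma(1)} \ge \dots \ge x_{\sigma(n)}, \quad w_i \ge 0, \ \ \sum_i w_i = 1 .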

8
  • THIAGO SOARES MARQUES
  • Multi-objective Optimization of Beam Angle and Fluence Map for IMRT Radiation Therapy

  • Advisor : ELIZABETH FERREIRA GOUVEA GOLDBARG
  • COMMITTEE MEMBERS :
  • ELIZABETH FERREIRA GOUVEA GOLDBARG
  • MARCO CESAR GOLDBARG
  • SILVIA MARIA DINIZ MONTEIRO MAIA
  • THALITA MONTEIRO OBAL
  • Data: Jul 23, 2019


  • Show Abstract
  • Beam Angle Optimization (BAO) and Fluence Map Optimization (FMO) are two problems that arise in the planning of radiation therapy for cancer treatment. One of the main modes of radiation therapy is IMRT (Intensity-Modulated Radiation Therapy), which consists in using computer-controlled linear accelerators to deliver precise radiation doses to the tumor. IMRT aims at finding a balance between exposing the region of the tumor and, at the same time, preventing, as much as possible, the healthy tissues that surround the tumor from receiving radiation. This study presents linear and quadratic programming models for the FMO, and single- and multi-objective algorithms that use those models for the BAO problem. This study also reports the results of computational experiments on a set of real instances.

9
  • YSTALLONNE CARLOS DA SILVA ALVES
  • Quantum Computing Application in Super-Resolution for Surveillance Imagery

  • Advisor : BRUNO MOTTA DE CARVALHO
  • COMMITTEE MEMBERS :
  • ARAKEN DE MEDEIROS SANTOS
  • BRUNO MOTTA DE CARVALHO
  • MARJORY CRISTIANY DA COSTA ABREU
  • Data: Jul 31, 2019


  • Show Abstract
  • Super-Resolution (SR) is a technique that has been exhaustively explored and brings strategic possibilities to image processing. As quantum computers gradually evolve and provide unconditional proof of a computational advantage over their classical counterparts at solving intractable problems, quantum computing emerges with the compelling argument of offering exponential speed-up for computationally expensive operations. Envisioning the design of parallel, quantum-ready algorithms for near-term noisy devices, and building on Rapid and Accurate Image Super-Resolution (RAISR), an implementation applying variational quantum computation is demonstrated for enhancing degraded surveillance imagery. This study proposes an approach that combines the benefits of RAISR, a non-hallucinating and computationally efficient method, and the Variational Quantum Eigensolver (VQE), a hybrid classical-quantum algorithm, to conduct SR with the support of a quantum computer, while preserving quantitative performance in terms of Image Quality Assessment (IQA). It covers the generation of additional hash-based filters learned with the classical implementation of the SR technique, in order to further explore performance improvements, produce images that are significantly sharper, and induce the learning of more powerful upscaling filters with integrated enhancement effects. As a result, it extends the potential of applying RAISR to improve low-quality assets generated by low-cost cameras, as well as fosters the eventual implementation of robust image enhancement methods powered by quantum computation.

10
  • ALISSON PATRICK MEDEIROS DE LIMA
  • A Dynamic Elasticity Control Approach Tailored to Slice-Defined Cloud-Network Systems

  • Advisor : AUGUSTO JOSE VENANCIO NETO
  • COMMITTEE MEMBERS :
  • AUGUSTO JOSE VENANCIO NETO
  • LILIANE RIBEIRO DA SILVA
  • NELIO ALESSANDRO AZEVEDO CACHO
  • RAFAEL PASQUINI
  • SILVIO COSTA SAMPAIO
  • Data: Jul 31, 2019


  • Show Abstract
  • The design of efficient elasticity control mechanisms for dynamic resource allocation is crucial to increase the efficiency of future cloud-network sliced networks. Current elasticity control mechanisms proposed for cloud or network-slice environments consider only cloud or only network resources. Cloud-network sliced networks will obtain substantial gains from these mechanisms only if they consider both cloud and network resources in an integrated fashion. Moreover, they must enable a fast enough orchestration of these resources while preserving the performance and isolation of the cloud-network slices. In this work, we propose an elasticity control approach to dynamically control and manage the orchestration of resources in cloud-network slice-defined systems. A prototype was developed, and its preliminary results suggest that the proposed approach is a viable solution to provide elasticity in environments defined by cloud-network slices.

11
  • SEBASTIÃO RICARDO COSTA RODRIGUES
  • A Framework to Integrate Programming Learning Platforms and Computer Science Unplugged

  • Advisor : EDUARDO HENRIQUE DA SILVA ARANHA
  • COMMITTEE MEMBERS :
  • EDUARDO HENRIQUE DA SILVA ARANHA
  • MARCIA JACYNTHA NUNES RODRIGUES LUCENA
  • ROBERTO ALMEIDA BITTENCOURT
  • Data: Jul 31, 2019


  • Show Abstract
  • Today's world increasingly requires individuals who are capable of using technology, regardless of their area of activity. The teaching of programming has been a recurring research target in the scientific community, and several approaches have been proposed, such as digital games, visual block-based programming paradigms, gamification platforms, robotics, among others. In Brazil, mainly in the public school system, several technological infrastructure problems, such as the lack of computers, laboratories, and internet access, make interventions towards teaching programming difficult. This work presents a framework that allows the integration of tangible objects with existing programming teaching platforms through Computer Vision techniques and approaches based on Computer Science Unplugged (CS Unplugged). To this end, studies were carried out in search of evidence that could answer the research questions around the goal of this work: a Systematic Literature Review (SLR), an exploratory study, and an experimental study, aimed at prospecting requirements and contextual analysis, implementing suitable techniques and strategies, and evaluating whether the proposed approach fits well in a programming teaching context. The results evidenced the possibility of concretely integrating educational platforms based on block-based visual programming with CS Unplugged activities through Computer Vision techniques, providing a positive experience in the programming teaching-learning process. It can be concluded that the proposed approach, which promotes the integration of tangible objects with programming teaching platforms, presents advantages by bringing constructivist activities closer to the virtual resources made available by digital programming platforms.

12
  • César Augusto Perdigão Batista
  • KNoT-FI: A FIWARE-based Integrated Environment for the Development of the Internet of Things Applications

  • Advisor : THAIS VASCONCELOS BATISTA
  • COMMITTEE MEMBERS :
  • EVERTON RANIELLY DE SOUSA CAVALCANTE
  • GIBEON SOARES DE AQUINO JUNIOR
  • KIEV SANTOS DA GAMA
  • THAIS VASCONCELOS BATISTA
  • Data: Aug 5, 2019


  • Show Abstract
  • With the rising popularity of IoT, several platforms have been proposed for supporting the development of IoT applications. KNoT and FIWARE are examples of open-source platforms with a complementary purpose. While KNoT is a gateway-based middleware to embed connectivity into devices and route messages between them and applications, FIWARE provides a rich ecosystem with standardized APIs for developing IoT applications. Aiming at combining the KNoT capability of integrating a plethora of devices with the high-level abstractions provided by the FIWARE platform, this work presents the KNoT-FI environment. It integrates KNoT and FIWARE towards easing the development of IoT applications and seamlessly using capabilities of devices with or without native Internet connection through the FIWARE advanced interfaces. This work also presents a validation of KNoT-FI in the development of a real-world application that automatically manages lighting, temperature, and ambient sound in smart buildings.
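    To give a flavor of the FIWARE side of this integration, the sketch below publishes a sensor reading as an NGSIv2 entity in the Orion Context Broker; the entity id, type, and attribute are invented for the example, the broker URL assumes a local Orion instance, and the KNoT-specific plumbing is omitted:

      import json
      from urllib.request import Request, urlopen

      entity = {
          "id": "urn:ngsi-ld:Room:001",          # hypothetical device entity
          "type": "Room",
          "temperature": {"value": 23.5, "type": "Number"},
      }
      req = Request(
          "http://localhost:1026/v2/entities",   # default Orion NGSIv2 endpoint
          data=json.dumps(entity).encode(),
          headers={"Content-Type": "application/json"},
          method="POST",
      )
      urlopen(req)                               # expects 201 Created on success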

13
  • EDUARDO HENRIQUE ROCHA DO NASCIMENTO
  • A Gamified Approach to Support Software Testing Courses

  • Advisor : ROBERTA DE SOUZA COELHO
  • COMMITTEE MEMBERS :
  • ROBERTA DE SOUZA COELHO
  • EDUARDO HENRIQUE DA SILVA ARANHA
  • ANDRE MAURICIO CUNHA CAMPOS
  • CHARLES ANDRYE GALVAO MADEIRA
  • AYLA DÉBORA DANTAS DE SOUZA REBOUÇAS
  • Data: Aug 6, 2019


  • Show Abstract
  • Software Testing is an essential subarea of Software Engineering, whose responsibility is to ensure software quality through its techniques and practices. Testing activities are present in the entire process of software construction, from the specification, through development, until deployment. Even having a vital role in the software development process, Software Testing has its techniques underused by software companies, and this negligence has a direct impact on software quality. Some reasons pointed out in the literature for this fact are that testing activities are costly, difficult, and tedious. This problem is found both in industry and in academia, and there is a correlation between them, as some problems are born in academia and extend into industry. A possible way to treat this kind of problem is gamification, which conceptually is the use of game design elements in non-game environments with the purpose of increasing the engagement and motivation of the people involved. Recent studies have shown the growing adoption of gamified strategies in the teaching of Software Testing to treat motivational problems of students. Given this context, this research work intends to combine gamification with Software Testing topics to deal with the lack of motivation of students to perform specific testing activities. To achieve this objective, a search of the literature on gamification methodologies was carried out, along with a systematic mapping study that gathered studies about the application of gamification and games in the Software Testing area. The gamification methodology chosen was Level Up, which describes an interactive and systematic process to conceive gamified approaches for educational environments. This methodology provides a set of stages that cover the ideation, experimentation, and evolution of the approach. The evaluation of the proposed gamified approach was performed in the experimentation stage, where two groups of students took part in two activities, one non-gamified and another gamified, and answered a qualitative questionnaire about satisfaction with and acceptance of the gamified approach. The information collected with the questionnaire and the observations made during the experimentation will be used to adjust the proposed approach in the evolution stage, not yet performed. The results of the evaluation showed that the gamified approach did impact the students, making it possible to identify aspects related to student interaction and some inconsistencies of the approach that will need to be readapted or removed.

14
  • ADELSON DIAS DE ARAÚJO JÚNIOR
  • Predspot: Predicting Crime Hotspots with Machine Learning

  • Advisor : NELIO ALESSANDRO AZEVEDO CACHO
  • COMMITTEE MEMBERS :
  • MARJORY CRISTIANY DA COSTA ABREU
  • NELIO ALESSANDRO AZEVEDO CACHO
  • LEONARDO CESAR TEONACIO BEZERRA
  • OURANIA KOUNADI
  • Data: Sep 24, 2019


  • Show Abstract
  • Smarter cities are largely adopting data infrastructure and analysis to improve decision making for public safety issues. Although traditional hotspot policing methods have shown benefits in reducing crime, previous studies suggest that the adoption of predictive techniques can produce more accurate estimates of future crime concentration. In this work we propose a framework to generate future hotspots using spatiotemporal features and other geographic information from OpenStreetMap. We implemented an open-source Python package called predspot to support efficient hotspot prediction following the steps suggested in the framework. To evaluate the predictive approach against the traditional methodology implemented by Natal's police department, we compared two crime mapping methods (KGrid and KDE) and two efficient machine learning algorithms (Random Forest and Gradient Boosting) in twelve crime scenarios, considering burglary, violent, and drug crimes. The results indicate that our predictive approach estimates hotspots 1.6-5.1 times better than the analysts' baseline. A feature importance analysis was extracted from the models to account for how much the selected variables helped the predictions and to discuss the modelling strategy we conducted.
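    The pipeline described above can be sketched as follows; the data are synthetic, the feature set is reduced to a single KDE feature, and the function names do not reflect the actual predspot API:

      import numpy as np
      from sklearn.ensemble import GradientBoostingRegressor
      from sklearn.neighbors import KernelDensity

      rng = np.random.default_rng(0)
      past_crimes = rng.uniform(0, 10, size=(500, 2))        # (x, y) of past events
      grid = np.array([(x, y) for x in range(10) for y in range(10)], float)

      kde = KernelDensity(bandwidth=1.0).fit(past_crimes)
      density = np.exp(kde.score_samples(grid))
      X = np.c_[grid, density]                               # cell coords + KDE feature
      y = rng.poisson(100 * density)                         # stand-in for next-period counts

      model = GradientBoostingRegressor().fit(X, y)
      hotspots = grid[np.argsort(model.predict(X))[-10:]]    # top-10 predicted cells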

15
  • DENIS JOSÉ SOUSA DE ALBUQUERQUE
  • Identification of problems and hot topics for developers of Big Data applications on the Apache Spark framework

  • Advisor : UMBERTO SOUZA DA COSTA
  • COMMITTEE MEMBERS :
  • UMBERTO SOUZA DA COSTA
  • MARTIN ALEJANDRO MUSICANTE
  • MARCUS ALEXANDRE NUNES
  • PLACIDO ANTONIO DE SOUZA NETO
  • Data: Sep 27, 2019


  • Show Abstract
  • This research aims to identify and classify the main difficulties and issues of interest of Apache Spark application developers regarding the framework's usage. For this purpose, we use the Latent Dirichlet Allocation algorithm to perform probabilistic topic modeling on information extracted from Stack Overflow, since manual inspection of the entire dataset is not feasible. From the knowledge obtained through a comprehensive study of related works, we established and applied a methodology based on the practices usually employed. We developed Spark applications for the automated execution of tasks such as data selection and preparation, the discovery of topics - applying the probabilistic modeling algorithm with various configurations - and metrics computation. Analyses of the results were carried out by a group of five researchers: two professors holding doctorates, one doctoral student, and two master's students. Based on the semantic analysis of the labels assigned to each of the identified topics, a taxonomy of interests and difficulties was constructed. Finally, we ranked the most important themes according to the various calculated metrics and compared the methods and results of our study with those presented in another work.
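    The topic-modeling step can be illustrated in a few lines (toy posts; the study applied the algorithm to Stack Overflow data with Spark-based preprocessing and several configurations):

      from sklearn.decomposition import LatentDirichletAllocation
      from sklearn.feature_extraction.text import CountVectorizer

      posts = [
          "spark dataframe join runs out of memory on executor",
          "how to tune spark shuffle partitions for performance",
          "rdd map reduce with python udf is slow",
      ]
      X = CountVectorizer(stop_words="english").fit_transform(posts)
      lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
      # Each row of lda.components_ ranks the vocabulary terms of one topic.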

16
  • TIAGO HENRIQUE DA SILVA LEITE
  • Do hackathon or game jam projects continue? A study on the continuity of projects developed in time-bounded events.

  • Advisor : FERNANDO MARQUES FIGUEIRA FILHO
  • COMMITTEE MEMBERS :
  • CLEIDSON RONALD BOTELHO DE SOUZA
  • FERNANDO MARQUES FIGUEIRA FILHO
  • UIRA KULESZA
  • Data: Oct 21, 2019


  • Show Abstract
  • Time-bounded collaborative events, such as hackathons and game jams, have become quite popular in recent years and have been drawing the attention of the scientific community. Existing research has studied various aspects of these events, while the post-event, i.e. what occurs after these events, has received little attention. In addition, existing studies are limited to events of the same nature, such as civic, industrial, or academic hackathons, and likewise for game jams, with each work contemplating only one type of event. Research on the phenomena surrounding the post-event, at a wider granularity, is still scarce. In this study, we address this gap by presenting the results of an exploratory study featuring analyses of two large hackathons and a global game jam. Most participants in these events expressed their intention to continue the social relations that were developed in them. However, they indicated that there is no planning to continue the development of the projects developed, although there is interest in working on other projects with the team formed at the event. The main objective is to validate and deepen these questions, bringing contributions on the positive and negative aspects of the discontinuity of these projects and the maintenance of the social bonds formed during the events.

17
  • LUIZ RANYER DE ARAÚJO LOPES
  • Implementing the Versatile Papílio Cryptographic Algorithm in the OpenSSL Library

  • Advisor : BENJAMIN RENE CALLEJAS BEDREGAL
  • COMMITTEE MEMBERS :
  • AUGUSTO JOSE VENANCIO NETO
  • BENJAMIN RENE CALLEJAS BEDREGAL
  • ISAAC DE LIMA OLIVEIRA FILHO
  • KARLA DARLENE NEPOMUCENO RAMOS
  • Data: Oct 22, 2019


  • Show Abstract
  • In a globalized and highly exposed world, information is one of the most valuable assets. With the continuous growth of information technologies and the large volume of connected and interconnected devices of the Internet of Things (IoT), which contributes to the growing amount of transmitted data, cyber attacks have become part of the daily lives of businesses and people. These attacks can pose direct risks to users. Thus, there is a growing demand to keep this information free of risks and dangers regarding its integrity, authenticity, and confidentiality. In this sense, information security seeks to protect information by implementing security policies and data protection mechanisms, which must address the appropriate balance between the human and technical aspects of information security. Among these protection mechanisms, encryption is one of the most widely used. This protection is directly related to the types of cryptographic algorithms that can be used in the most diverse contexts. Here, we approach the use of cryptographic algorithms inserted in the client/server communication process via the OpenSSL tool. In order to investigate the level of security offered by OpenSSL, this work addresses the integration of the Versatile Papílio encryption algorithm into the set of ciphers integrated in OpenSSL. In addition, we seek to measure the level of security inherent in the use of Versatile Papílio within the process of protecting data transmission between client and server. Through an experimental evaluation it was possible to validate the implementation. It can be observed that the requests made had a small average increase in latency, but this cost is offset by the increased security on the platform.


18
  • MICKAEL RANINSON CARNEIRO FIGUEREDO
  • A Tourism Multi-user Recommendation Approach Based on Social Media Photos

  • Advisor : NELIO ALESSANDRO AZEVEDO CACHO
  • COMMITTEE MEMBERS :
  • NELIO ALESSANDRO AZEVEDO CACHO
  • BRUNO MOTTA DE CARVALHO
  • ANTONIO CARLOS GAY THOME
  • DANIEL SABINO AMORIM DE ARAUJO
  • JOSEFINO CABRAL MELO LIMA
  • Data: Nov 27, 2019


  • Show Abstract
  • The tourism sector is one of the most relevant economic activities nowadays. Thus, it is important to invest in different approaches to create a great experience during visitors' trips to a destination. In the context of Smart Cities, the idea of the Smart Destination appears as a solution to improve the tourism experience, using technology to support visitors in a Smart City. The proposed study creates an approach to support a Smart Tourism Destination in creating better trip planning based on photos from social media. The research aims to create recommendations for single tourists or groups of tourists using image classification techniques and fuzzy inference to map tourist preferences. Through the fuzzy inference system, which encodes the knowledge of tourism experts inside a recommendation system, the proposed approach is able to create personalized recommendations using attractions from a Smart Destination.

19
  • HORTEVAN MARROCOS FRUTUOSO
  • Programmable Adaptive Transducers and Their Application in Dialog Agent Development

  • Advisor : BENJAMIN RENE CALLEJAS BEDREGAL
  • COMMITTEE MEMBERS :
  • ANDERSON PAIVA CRUZ
  • ANNE MAGALY DE PAULA CANUTO
  • BENJAMIN RENE CALLEJAS BEDREGAL
  • CARLOS AUGUSTO PROLO
  • HELIDA SALLES SANTOS
  • Data: Dec 13, 2019


  • Show Abstract
  • The present work presents a dialog agent model built from the Programmable Adaptive Transducer, a device based on a deterministic finite automaton with adaptive technology, capable of extending its own state structure and transition rules, so that the language recognized by the transducer becomes a limited but expandable subset of natural language. This device is then used to model a dialog agent capable of receiving sentences in Portuguese and reacting to them with behavior semantically compatible with the sentence given by the user, responding according to the definitions registered in the agent's knowledge base. The agent is also given the ability to expand this knowledge base from the user's own sentences, through an appropriate syntax for doing so.
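    A toy rendering of the adaptive idea (this is not the dissertation's formal device; states, words, and outputs are invented):

      class AdaptiveTransducer:
          def __init__(self):
              self.delta = {}        # (state, word) -> (next_state, output)
              self.next_state = 1

          def teach(self, state, word, output):
              # Adaptive action: extend the transition table (and states) on demand.
              self.delta[(state, word)] = (self.next_state, output)
              self.next_state += 1

          def run(self, words):
              state, out = 0, []
              for w in words:
                  if (state, w) not in self.delta:
                      return None    # unknown sentence: the agent could ask to learn it
                  state, o = self.delta[(state, w)]
                  out.append(o)
              return out

      t = AdaptiveTransducer()
      t.teach(0, "ola", "greeting")  # the knowledge base grows at run time
      print(t.run(["ola"]))          # ['greeting']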

Thesis
1
  • MAXWEEL SILVA CARMO
  • A Framework for Efficient WLAN-Sharing WiFi Systems by Network Slicing in the Context of 5G Ultra Dense Networks

  • Advisor : AUGUSTO JOSE VENANCIO NETO
  • COMMITTEE MEMBERS :
  • AUGUSTO JOSE VENANCIO NETO
  • DANIEL CORUJO
  • GIBEON SOARES DE AQUINO JUNIOR
  • RUI LUIS ANDRADE AGUIAR
  • THAIS VASCONCELOS BATISTA
  • Data: Jan 29, 2019


  • Show Abstract
  • The realization of innovative use cases involving data communication has challenged established communication networking approaches. This thesis aims to evolve current WLAN Wi-Fi systems with shared access facilities to meet the challenge of efficiently serving the massive demand for mobile data in 5G use cases, namely UDN (Ultra-Dense Networking) and URLLC (Ultra-Reliable and Low-Latency Communications). Complementary aspects of emerging technologies in 5G systems, namely virtualization of network functions [7], fog computing [8], software-defined networking, and others, were investigated with the prospect of creating a unique and innovative framework for Wi-Fi WLAN-sharing systems. This thesis proposes the WISE framework (WLAN slIcing SErvice), which applies the network slicing technique for the first time in WLAN-sharing-enabled Wi-Fi CPEs (Customer Premises Equipment) located closer to users and things, with the perspective of offering differentiated network services featuring isolation, independence, and increased performance. Moreover, the WISE framework exploits fog computing technology as a way to expand the computational capabilities of off-the-shelf CPEs to afford running part of the functionalities of the target framework.

2
  • ITAMIR DE MORAIS BARROCA FILHO
  • Architectural design of IoT-based healthcare applications

  • Advisor : GIBEON SOARES DE AQUINO JUNIOR
  • COMMITTEE MEMBERS :
  • GIBEON SOARES DE AQUINO JUNIOR
  • ROSSANA MARIA DE CASTRO ANDRADE
  • THAIS VASCONCELOS BATISTA
  • UIRA KULESZA
  • VINICIUS CARDOSO GARCIA
  • Data: Feb 8, 2019


  • Show Abstract
  • The myriad of connected things promoted by the Internet of Things (IoT) and the data captured by them are making possible the development of applications in various markets, such as transportation, buildings, energy, home, industry, and healthcare. Concerning the healthcare market, these applications are expected to be part of its future, since they can improve e-Health, allowing hospitals to operate more efficiently and patients to receive better treatment. IoT can be the main enabler for distributed healthcare applications, thus having a significant potential to contribute to the overall decrease of healthcare costs while improving health outcomes. However, there are many challenges in the development and deployment of this kind of application, such as interoperability, availability, performance, and security. The complex and heterogeneous nature of IoT-based healthcare applications makes their design, development, and deployment difficult, causing an increase in development cost as well as interoperability problems with existing systems. To contribute to solving the aforementioned challenges, this thesis aims at improving the understanding and systematization of the architectural design of IoT-based healthcare applications. It proposes a software reference architecture, named Reference Architecture for IoT-based Healthcare Applications (RAH), to systematically organize the main elements of these applications, their responsibilities, and interactions, promoting a common understanding of their architecture. To establish RAH, a systematic mapping study of existing publications regarding IoT-based healthcare applications was performed, as well as a study of the quality attributes, tactics, architectural patterns, and styles used in software engineering. As a result, RAH presents domain knowledge and software architectural solutions (i.e., architectural patterns and tactics) documented using architectural views. To assess RAH, a case study was performed by instantiating it to design the software architecture of a computational platform based on IoT infrastructure that allows intelligent remote monitoring of patients' health data (biometrics). With this platform, the clinical staff can be alerted to health events that require immediate intervention and thereby prevent unwanted complications. Results evidenced that RAH is a viable reference architecture to guide the development of secure, interoperable, available, and efficient IoT-based healthcare applications, bringing contributions to the areas of e-Health and software architecture.

3
  • ROMULO DE OLIVEIRA NUNES
  • Dynamic Feature Selection for Ensemble Systems

  • Advisor : ANNE MAGALY DE PAULA CANUTO
  • COMMITTEE MEMBERS :
  • ANNE MAGALY DE PAULA CANUTO
  • ARAKEN DE MEDEIROS SANTOS
  • DANIEL SABINO AMORIM DE ARAUJO
  • GEORGE DARMITON DA CUNHA CAVALCANTI
  • MARJORY CRISTIANY DA COSTA ABREU
  • Data: Feb 22, 2019


  • Show Abstract
  • In machine learning, data preprocessing aims to improve data quality by analyzing and identifying problems in the data, so that the machine learning technique receives data of good quality. Feature selection is one of the most important preprocessing phases. Its main aim is to choose the best subset of features to represent the dataset, reducing dimensionality and increasing classifier performance. There are different feature selection approaches, one of them being Dynamic Feature Selection, which selects the best subset of attributes for each instance instead of a single subset for the full dataset. After selecting a more compact data representation, the next step in classification is to choose the model to classify the data. This model can be composed of a single classifier or a system with multiple classifiers, known as an ensemble of classifiers. These systems combine the outputs of their members to obtain a final answer. For such systems to perform better than a single classifier, it is necessary to promote diversity among the components of the system, so that the base classifiers do not make mistakes on the same patterns; diversity is thus considered one of the most important aspects of using ensembles. The aim of this work is to use Dynamic Feature Selection in ensemble systems. To this end, three versions were developed to adapt this feature selection method and to create diversity among the classifiers of the ensemble. The versions were compared using different selection rates in an ensemble with five classifiers. After this, the best version was tested with different ensemble sizes.
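    The overall flow can be sketched as below; the per-instance ranking criterion and the member-specific selection rates are stand-ins chosen for brevity, not the dissertation's actual selection methods:

      import numpy as np
      from sklearn.datasets import load_iris
      from sklearn.naive_bayes import GaussianNB

      X, y = load_iris(return_X_y=True)
      mu = X.mean(axis=0)
      members = [GaussianNB().fit(X, y) for _ in range(5)]       # base classifiers

      def predict_dynamic(x):
          votes = []
          for i, clf in enumerate(members):
              k = max(1, int((i + 1) / len(members) * len(x)))   # member-specific rate
              idx = np.argsort(-np.abs(x - mu))[:k]              # per-instance feature ranking
              masked = mu.copy()                                 # neutralize unselected features
              masked[idx] = x[idx]
              votes.append(int(clf.predict(masked[None, :])[0]))
          return np.bincount(votes).argmax()                     # majority vote

      print(predict_dynamic(X[0]))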

4
  • RUI EDUARDO BRASILEIRO PAIVA
  • A lattice extension for Overlaps and naBL-algebras

  • Advisor : REGIVAN HUGO NUNES SANTIAGO
  • COMMITTEE MEMBERS :
  • BENJAMIN RENE CALLEJAS BEDREGAL
  • FLAULLES BOONE BERGAMASCHI
  • JORGE PETRUCIO VIANA
  • MARCO CERAMI
  • REGIVAN HUGO NUNES SANTIAGO
  • UMBERTO RIVIECCIO
  • Data: Aug 5, 2019


  • Show Abstract
  • Overlap functions were introduced as a class of bivariate aggregation functions on [0, 1] to be applied in the image processing field. Many researchers have begun to develop overlap functions to explore their potential in different scenarios, such as problems involving classification or decision making. Recently, a non-associative generalization of Hájek's BL-algebras (naBL-algebras) was investigated from the perspective of overlap functions as a residuated application. In this work, we generalize the notion of overlap functions to the lattice context and introduce a weaker definition, called quasi-overlap, that arises from the removal of the continuity condition. To this end, the main properties of (quasi-)overlaps over bounded lattices, namely convex sum, migrativity, homogeneity, idempotency, and the cancellation law, are investigated, and a characterization of Archimedean overlap functions is presented. In addition, we formalize the residuation principle for the case of quasi-overlap functions on lattices and their respective induced implications, revealing that the class of quasi-overlap functions that fulfill the residuation principle is the same as the class of continuous functions according to the Scott topology. As a consequence, we provide a new generalization of the notion of naBL-algebras based on overlaps over lattices.
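    For reference, the standard [0,1] definition that the work lifts to bounded lattices reads as follows (the usual formulation from the literature, stated here only for context):

      An overlap function is a mapping \(O\colon [0,1]^2 \to [0,1]\) satisfying
      \[
      \begin{array}{ll}
      (O1) & O(x,y) = O(y,x); \\
      (O2) & O(x,y) = 0 \iff xy = 0; \\
      (O3) & O(x,y) = 1 \iff xy = 1; \\
      (O4) & O \ \text{is non-decreasing in each argument}; \\
      (O5) & O \ \text{is continuous.}
      \end{array}
      \]
      A quasi-overlap drops (O5); the product \(O(x,y) = xy\) satisfies all five conditions.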

5
  • ANTONIA JOCIVANIA PINHEIRO
  • On Algebras for Interval-Valued Fuzzy Logic

  • Advisor : REGIVAN HUGO NUNES SANTIAGO
  • COMMITTEE MEMBERS :
  • REGIVAN HUGO NUNES SANTIAGO
  • JOAO MARCOS DE ALMEIDA
  • BENJAMIN RENE CALLEJAS BEDREGAL
  • WELDON ALEXANDER LODWICK
  • JORGE PETRUCIO VIANA
  • FLAULLES BOONE BERGAMASCHI
  • GRAÇALIZ PEREIRA DIMURO
  • Data: Aug 30, 2019


  • Show Abstract
  • On Algebras for Interval-Valued Fuzzy Logic

6
  • WALDSON PATRICIO DO NASCIMENTO LEANDRO
  • Real-time Data Processing and Multimodal Interactive Visualization Using GPU and Data Structures for Geology and Geophysics

  • Advisor : BRUNO MOTTA DE CARVALHO
  • COMMITTEE MEMBERS :
  • BRUNO MOTTA DE CARVALHO
  • JOAQUIM BENTO CAVALCANTE NETO
  • MARCIO EDUARDO KREUTZ
  • SELAN RODRIGUES DOS SANTOS
  • WILFREDO BLANCO FIGUEROLA
  • Data: Aug 30, 2019


  • Show Abstract
  • Geophysics is an area of natural science concerned with the physical processes and properties of the Earth and its surrounding space environment, as well as the use of quantitative methods for their analysis. With the advent of new sensors and the need to acquire data over wider regions or at higher resolutions, the amount of data being analyzed has increased much more rapidly than the ability to process it in real time on regular workstations. Moreover, joining data from different sensors to display them simultaneously becomes a complicated task with conventional visualization algorithms and techniques. In this work, we explore methods for processing large geophysical and geological datasets in real time and for visualizing massive multivariate data within interactive time. Several devices and methods are part of the prospection and monitoring activities commonly used in these areas, and both benefit from faster processing and interactive visualization, bringing new analysis and interpretation possibilities for these massive datasets.

7
  • KARLIANE MEDEIROS OVIDIO VALE
  • The Proposal of an Automated Process of Inclusion of New Instances in Semi-Supervised Learning Algorithms

  • Advisor : ANNE MAGALY DE PAULA CANUTO
  • COMMITTEE MEMBERS :
  • ANNE MAGALY DE PAULA CANUTO
  • ARAKEN DE MEDEIROS SANTOS
  • DANIEL SABINO AMORIM DE ARAUJO
  • DIEGO SILVEIRA COSTA NASCIMENTO
  • FLAVIUS DA LUZ E GORGONIO
  • MARJORY CRISTIANY DA COSTA ABREU
  • Data: Nov 22, 2019


  • Show Abstract
  • Machine learning is a field of artificial intelligence dedicated to the study and development of computational techniques that obtain knowledge through accumulated experience. According to the nature of the information provided, machine learning was initially divided into two types: supervised and unsupervised learning. In supervised learning, the data used in training have labels, while in unsupervised learning the instances to be trained have no labels. Over the years the academic community started studying a third type of learning, regarded as the middle ground between supervised and unsupervised learning and known as semi-supervised learning. In this type of learning, most of the training set labels are unknown, but there is a small part of the data whose labels are known. Semi-supervised learning is attractive because of its potential to use labeled and unlabeled data to achieve better performance than supervised learning. This work consists of a study in the field of semi-supervised learning and implements changes to the self-training and co-training algorithms. In the literature, it is common to find research that changes the structure of these algorithms; however, none of it proposes a variation in the rate of inclusion of new instances in the labeled dataset, which is the main purpose of this work. To achieve this goal, three methods are proposed: FlexCon-G, FlexCon, and FlexCon-C. The main differences between these methods lie in: 1) the way they compute a new value for the minimum confidence rate for including new patterns and 2) the strategy used to choose the label of each instance. In order to evaluate the proposed methods, experiments were performed on 30 datasets with diversified characteristics. The obtained results indicate that the three proposed methods perform better than the original self-training and co-training methods in most cases.
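    A minimal self-training loop with an adjustable confidence threshold, the ingredient the FlexCon variants control, is sketched below (the linear threshold decay is only illustrative; the three methods differ precisely in how this update is computed):

      import numpy as np
      from sklearn.datasets import load_iris
      from sklearn.naive_bayes import GaussianNB

      X, y = load_iris(return_X_y=True)
      rng = np.random.default_rng(1)
      labeled = rng.choice(len(y), 10, replace=False)        # small labeled part
      unlabeled = np.setdiff1d(np.arange(len(y)), labeled)
      y_work, conf = y.copy(), 0.95

      while len(unlabeled) and conf > 0.5:
          clf = GaussianNB().fit(X[labeled], y_work[labeled])
          proba = clf.predict_proba(X[unlabeled])
          sure = proba.max(axis=1) >= conf                   # confident predictions only
          newly = unlabeled[sure]
          y_work[newly] = clf.classes_[proba[sure].argmax(axis=1)]
          labeled = np.concatenate([labeled, newly])
          unlabeled = unlabeled[~sure]
          conf -= 0.05                                       # flexible confidence update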

8
  • HUGO FARIA MELO
  • Identifying and Analyzing Exception Handling Practices: A Developers' Point of View

  • Advisor : ROBERTA DE SOUZA COELHO
  • COMMITTEE MEMBERS :
  • ROBERTA DE SOUZA COELHO
  • UIRA KULESZA
  • EIJI ADACHI MEDEIROS BARBOSA
  • CHRISTOPH TREUD
  • FERNANDO JOSÉ CASTOR DE LIMA FILHO
  • RODRIGO BONIFACIO DE ALMEIDA
  • Data: Nov 29, 2019


  • Show Abstract

  • The exception handling mechanism is a feature present in most modern programming languages for the development of fault-tolerant systems. Despite being one of the oldest features of the Java language, developers still struggle to use exception handling for even the most basic problems. Although the exception handling of a system is essentially a design problem, few works have investigated Java exception handling from the developers' point of view. In this thesis we explore the decisions made and solutions adopted by Java developers for exception handling in their projects. In total we conducted 6 studies, which consulted 423 developers through interviews and surveys, and analyzed the source code of 240 Java projects hosted on GitHub. Our results show that decisions regarding Java exception handling are not usually documented, and sometimes not even discussed verbally among the development team; that developers believe their code follows the solutions adopted; that developers learn about exception handling solutions through informal meetings and code inspection; and that the solutions adopted in a project are verified in the source code through code review. We analyzed the Java source code of the 240 projects to verify compliance with 7 of the 31 Java exception handling solutions we identified, and found that the code often fails to deliver what was planned. Our research reveals a weakness in the design, implementation, and verification of Java exception handling that will help researchers and the community to design tools and other solutions that help developers apply exception handling effectively.

9
  • ALLISSON DANTAS DE OLIVEIRA
  • MalariaApp: A Low-Cost System for Diagnosing Malaria on Thin Blood Smears using Mobile Devices

  • Advisor : BRUNO MOTTA DE CARVALHO
  • COMMITTEE MEMBERS :
  • ANNE MAGALY DE PAULA CANUTO
  • BRUNO MOTTA DE CARVALHO
  • VALTER FERREIRA DE ANDRADE NETO
  • JONES OLIVEIRA DE ALBUQUERQUE
  • DANIEL LÓPEZ CODINA
  • Data: Nov 29, 2019


  • Show Abstract
  • Nowadays, a variety of mobile devices are available and accessible to the general population, making them indispensable items for communication and use of various services. In the same direction, these devices have become quite useful in several areas of expertise, including the medical field. With the integration of these devices and applications, it is possible to perform preventive work, helping to combat outbreaks and even prevent epidemics. According to the World Health Organization (2017), malaria is one of the most lethal infectious diseases in the world, mainly in sub-Saharan Africa, while in Brazil cases occur most frequently in the Amazon region. For the diagnosis of malaria it is essential to have trained and experienced technicians to identify the species and phases of the disease, a crucial step in defining the ideal dosages of medication to administer to patients. In this work, we propose a low-cost malaria diagnosis system using mobile devices, in which segmentation, digital image processing, and convolutional neural network techniques are applied to perform cell counting, parasitemia estimation, and classification of Plasmodium parasites of the species P. falciparum and P. vivax in the trophozoite stage. A prototype with 3D-printed parts and electronic automation was proposed to perform the scanning and imaging of blood slides, integrating with the mobile system to perform on-site diagnosis without the need to change microscopy equipment, in keeping with the premise of low cost. An accuracy of 93% was obtained with the trained convolutional neural network model. In view of this, it is possible to break accessibility barriers in countries with few resources in the use of tools for diagnosis and screening of diseases.
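    The classification component can be pictured with a small convolutional network (the architecture, patch size, and class layout are illustrative, not the dissertation's exact network):

      from tensorflow import keras
      from tensorflow.keras import layers

      model = keras.Sequential([
          layers.Input((64, 64, 3)),             # RGB patch from a thin blood smear
          layers.Conv2D(16, 3, activation="relu"),
          layers.MaxPooling2D(),
          layers.Conv2D(32, 3, activation="relu"),
          layers.MaxPooling2D(),
          layers.Flatten(),
          layers.Dense(64, activation="relu"),
          layers.Dense(3, activation="softmax"), # P. falciparum, P. vivax, uninfected
      ])
      model.compile(optimizer="adam",
                    loss="sparse_categorical_crossentropy",
                    metrics=["accuracy"])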

10
  • JOSÉ GOMES LOPES FILHO
  • The Traveling Salesman with Collection of Optional Bonuses, Passengers, Collection Time, and Time Windows (PCVP-DJT)

  • Advisor : MARCO CESAR GOLDBARG
  • COMMITTEE MEMBERS :
  • MARCO CESAR GOLDBARG
  • ELIZABETH FERREIRA GOUVEA GOLDBARG
  • MATHEUS DA SILVA MENEZES
  • PAULO HENRIQUE ASCONAVIETA DA SILVA
  • THATIANA CUNHA NAVARRO DE SOUZA
  • Data: Dec 13, 2019


  • Show Abstract
  • This work examines a variant of the Traveling Salesman Problem called the Traveling Salesman with Collection of Optional Bonuses, Passengers, Collection Time, and Time Windows (PCVP-DJT). This variant involves vehicle routing, passenger ridesharing, and the execution of a courier's tasks. Two mathematical formulations are presented for the problem and validated through a computational experiment employing a mathematical solver. Four heuristic algorithms are proposed, three of which are hybrid metaheuristics. Computational results are presented and future work is proposed.

11
  • ANNAXSUEL ARAUJO DE LIMA
  • Multidimensional Fuzzy Sets

  • Advisor : BENJAMIN RENE CALLEJAS BEDREGAL
  • COMMITTEE MEMBERS :
  • BENJAMIN RENE CALLEJAS BEDREGAL
  • EDUARDO SILVA PALMEIRA
  • HELIDA SALLES SANTOS
  • REGIVAN HUGO NUNES SANTIAGO
  • RENATA HAX SANDER REISER
  • Data: Dec 20, 2019


  • Show Abstract
  • Since the emergence of fuzzy set theory, many extensions have been proposed; one of them is the n-dimensional fuzzy set, whose elements are tuples of size n with components in [0, 1] ordered increasingly, called n-dimensional intervals. Generally, these sets are used to develop tools that aid in modeling decision-making situations where, given a problem and an alternative, each n-dimensional interval represents the opinion of n specialists on the degree to which the alternative meets a given criterion or attribute for this problem. However, this approach is not able to deal with situations in which a particular expert may, for example, abstain from judging some decision criterion, so that the same problem would contain coexisting n-dimensional intervals with different values of n, or the set of specialists would change for each alternative/attribute pair. Thus, we need a new fuzzy set extension whose elements (intervals) can have arbitrary dimensions. In this work, we present the concept of multidimensional fuzzy sets as a generalization of n-dimensional fuzzy sets in which the elements can have different dimensions. We also present a way to generate comparisons (orderings) of elements of different dimensions, discuss conditions under which these sets have a lattice structure, and introduce the concepts of admissible orders, multidimensional aggregation functions, and fuzzy negations on multidimensional fuzzy sets. In addition, we deepen the study of ordinal sums of fuzzy negations.
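    In symbols, the construction can be summarized as follows (a common notation for the n-dimensional setting; not necessarily the thesis' exact symbols):

      \[
      L_n([0,1]) = \{ (x_1,\dots,x_n) \in [0,1]^n : x_1 \le \dots \le x_n \},
      \qquad
      L_\infty([0,1]) = \bigcup_{n \ge 1} L_n([0,1]),
      \]
      and a multidimensional fuzzy set \(A\) over a universe \(X\) assigns to each \(x \in X\) a membership degree \(A(x) \in L_\infty([0,1])\), so different elements may carry the opinions of different numbers of experts.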

2018
Dissertations
1
  • THIAGO NASCIMENTO DA SILVA
  • Nelson's logic S and its algebraic semantics

  • Advisor : UMBERTO RIVIECCIO
  • COMMITTEE MEMBERS :
  • JOAO MARCOS DE ALMEIDA
  • UMBERTO RIVIECCIO
  • HUGO LUIZ MARIANO
  • Data: Jan 25, 2018


  • Show Abstract
  • Besides the better-known Nelson logic (N3) and paraconsistent Nelson logic (N4), David Nelson introduced, in the 1959 paper "Negation and separation of concepts in constructive systems", with motivations of arithmetic and constructibility, a logic that he called "S". In that paper, the logic is defined by means of a calculus (which crucially lacks the contraction rule) having infinitely many rule schemata, and no semantics is provided for it.

    We look at the propositional fragment of S, showing that it is algebraizable (in fact, implicative) in the sense of Blok & Pigozzi with respect to a class of involutive residuated lattices. We thus provide the first known (algebraic) semantics for S, as well as a Hilbert-style calculus equivalent to Nelson's presentation. We also compare S with the other logics in the Nelson family, namely N3 and N4.

2
  • BRENNER HUMBERTO OJEDA RIOS
  • Hybridization of Metaheuristics with Methods Based on Linear Programming for the Traveling Car Renter Salesman Problem


  • Advisor : ELIZABETH FERREIRA GOUVEA GOLDBARG
  • COMMITTEE MEMBERS :
  • ELIZABETH FERREIRA GOUVEA GOLDBARG
  • MARCO CESAR GOLDBARG
  • MATHEUS DA SILVA MENEZES
  • SILVIA MARIA DINIZ MONTEIRO MAIA
  • Data: Feb 2, 2018


  • Show Abstract
  • The Traveling Car Renter Salesman Problem, or simply Traveling Car Renter Problem (CaRS), is a generalization of the Traveling Salesman Problem (TSP) where the tour can be decomposed into contiguous paths traveled by different rented cars. The objective is to construct a minimal-cost Hamiltonian circuit, considering the penalty paid for changing cars along the tour. This penalty is the cost of returning a car to the city where it was rented. CaRS is classified as an NP-hard problem. This work studies the CaRS version classified as complete, total, unrestricted, with no repetition, free, and symmetric. The research focuses on hybrid procedures that combine metaheuristics and methods based on Linear Programming (LP). The following methods were investigated: scientific algorithms (ScA), evolutionary algorithms (EA), variable neighborhood descent (VND), adaptive local search (ALSP), and a new variant of ALSP called iterated adaptive local search (IALSP). The following techniques are proposed to deal with CaRS: ScA+ALSP, EA+IALSP, ScA+IALSP, and ScA+VND+IALSP. A mixed integer programming model is proposed for CaRS and used within the ALSP and IALSP. Non-parametric tests were used to compare the algorithms on a set of instances from the literature.

3
  • DIEGO DE AZEVEDO OLIVEIRA
  • BTestBox: a testing tool for B implementations

  • Advisor : DAVID BORIS PAUL DEHARBE
  • COMMITTEE MEMBERS :
  • MARCEL VINICIUS MEDEIROS OLIVEIRA
  • DAVID BORIS PAUL DEHARBE
  • VALÉRIO GUTEMBERG DE MEDEIROS JUNIOR
  • Data: Feb 5, 2018


  • Show Abstract
  • Software needs to be safe and correct. From that assumption, new technologies and techniques are developed to verify the correctness of a program. This safety requirement is even more relevant for critical systems, such as railway and avionics systems. The use of formal methods in the construction of software attempts to address this problem. When using B with Atelier-B, after proving the components of a project it is necessary to translate them into the desired target language. This translation is performed by B translators and compilers. Usually, the compilation process is safe when done by mature compilers, although they are not free of errors and bugs are eventually found. Extending this claim to B translators demands caution, since they are not as widely exercised as compilers that have been longer on the market. Software testing can be used to analyze the translated code. Through coverage criteria it is possible to infer the level of quality of a piece of software and to detect bugs. Checking coverage and testing the software are hard and time-consuming tasks, especially when done manually. To address this demand, the BTestBox tool aims to automatically analyze the coverage achieved by B implementations built with Atelier-B. BTestBox also automatically tests the translation of B implementations: it uses the same test cases generated to verify coverage and compares the expected output values with the values produced by the translated code. This process is fully automatic and may be used from the Atelier-B interface through a plugin with a simple interface. This dissertation presents the BTestBox tool, an implementation of the ideas described above. BTestBox was tested with small B implementations covering all the elements of the B language, and presents various functionalities and advantages to developers who use the B-Method.

4
  • RENAN DE OLIVEIRA SILVA
  • A Proposal for a Process for Deploying Open Data in Brazilian Public Institutions

  • Advisor : GIBEON SOARES DE AQUINO JUNIOR
  • COMMITTEE MEMBERS :
  • FERNANDO MARQUES FIGUEIRA FILHO
  • GIBEON SOARES DE AQUINO JUNIOR
  • VANILSON ANDRÉ DE ARRUDA BURÉGIO
  • Data: Feb 20, 2018


  • Show Abstract
  • The Open Data initiative has been gaining strength in recent times, with increasing participation of public institutions. However, there are still many challenges that need to be overcome when deciding to open data, and they negatively affect the quality and effectiveness of publications. Therefore, the objective of this work is to establish a process that helps Brazilian public institutions open their data, systematizing the necessary tasks and phases. To this end, we carried out a systematic mapping of the literature in order to identify strategies, best practices, challenges, and difficulties that exist in the field.

5
  • FRED DE CASTRO SANTOS
  • A mechanism to evaluate context-free queries inspired by LR(1) parsers over graph databases


  • Advisor : UMBERTO SOUZA DA COSTA
  • COMMITTEE MEMBERS :
  • MARCEL VINICIUS MEDEIROS OLIVEIRA
  • MARIZA ANDRADE DA SILVA BIGONHA
  • MARTIN ALEJANDRO MUSICANTE
  • SERGIO QUEIROZ DE MEDEIROS
  • UMBERTO SOUZA DA COSTA
  • Data: Feb 23, 2018


  • Show Abstract
  • The World Wide Web is an always increasing collection of information. This information is spread among different documents, which are made available by using the Hypertext Transfer Protocol (HTTP). Even though this information is accessible to users in the form of news articles, audio broadcasts, images, and videos, software agents often cannot classify it. The lack of semantic information about these documents in a machine-readable format often causes the analysis to be inaccurate. A significant number of entities have adopted Linked Data as a way to add semantic information to their data, rather than just publishing it on the Web. The result is a global data collection, called the Web of Data, which forms a global graph consisting of Resource Description Framework (RDF) statements from numerous sources, covering all sorts of topics. To find specific information in this graph, queries are performed by starting at a subject and analyzing its predicates in the RDF statements. Given that a trace is a list of predicates in an information path, one can tell there is a connection between a subject and an object if there is a trace between them in the RDF statements.

    The use of HTTP as a standardized data access mechanism and RDF as a standard data model simplifies data access, but accessing heterogeneous data in distinct locations can have increased time complexity, and current query languages have reduced expressiveness, which motivates us to research alternatives for how this data is queried. This reduced expressiveness arises because most query languages reside in the class of regular languages. In this work, we introduce some of the concepts needed to better understand the given problems and how to solve them. We analyze some works related to our research and propose using Deterministic Context-Free Grammars instead of regular languages to increase the expressiveness of graph database queries; more specifically, we apply the LR(1) parsing method to find paths in an RDF graph database. Lastly, we analyze our algorithm's complexity and perform experiments, comparing our solution to other proposals, and show that ours can perform better in certain scenarios.
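    A classic illustration of this expressiveness gap (a textbook example, not taken from the thesis itself):

      \[
      S \to a\, S\, b \mid a\, b
      \qquad\Longrightarrow\qquad
      L(S) = \{ a^n b^n : n \ge 1 \},
      \]
      so a query pairing nodes connected by n edges labelled a followed by exactly n edges labelled b (e.g. a "same generation" traversal over a predicate and its inverse) cannot be written as a regular path expression, while a context-free grammar expresses it directly.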

6
  • CIRO MORAIS MEDEIROS
  • Top-Down Evaluation of Context-Free Path Queries in Graph Databases

  • Advisor : MARTIN ALEJANDRO MUSICANTE
  • COMMITTEE MEMBERS :
  • MARTIN ALEJANDRO MUSICANTE
  • MARCEL VINICIUS MEDEIROS OLIVEIRA
  • UMBERTO SOUZA DA COSTA
  • SERGIO QUEIROZ DE MEDEIROS
  • MARIZA ANDRADE DA SILVA BIGONHA
  • Data: Feb 23, 2018


  • Show Abstract
  • The internet has enabled the creation of an immense global data space that can be accessed in the form of web pages.
    However, web pages are ideal for presenting content to human beings, not for interpretation by machines.
    In addition, it is difficult to relate the information stored in the databases behind these pages.
    From this came Linked Data, a set of good practices for relating and publishing data.

    The standard format recommended by Linked Data for storing and publishing related data is RDF.
    This format uses triples in the form (subject, predicate, object) to establish relationships between the data.
    A triplestore can be easily visualized as a graph, so queries are made by defining paths in the graph.
    SPARQL, the standard query language for RDF graphs, supports the definition of paths using regular expressions.
    However, regular expressions have reduced expressiveness, insufficient for some desirable queries.
    In order to overcome this problem, some studies have proposed the use of context-free grammars to define the paths.

    We present an algorithm for evaluating context-free path queries in graphs inspired by top-down parsing techniques.
    Given a graph and a query defined over a context-free grammar, our algorithm identifies pairs of vertices linked by paths that form words of the language generated by the grammar.
    We show that our algorithm is correct and demonstrate other important properties of it.
    It presents cubic worst-case runtime complexity in terms of the number of vertices in the graph.
    We implemented the proposed algorithm and evaluated its performance with RDF databases and synthetic graphs to confirm its efficiency.
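    To make the evaluation idea concrete, the sketch below is a compact worklist evaluator for context-free path queries with the grammar in Chomsky normal form; it follows the well-known relational closure scheme rather than this thesis' top-down algorithm, and the graph and grammar are toy examples:

      from collections import deque

      def cfpq(edges, terminal_rules, binary_rules, start="S"):
          # edges: {(u, label, v)}; terminal_rules: {label: {A}};
          # binary_rules: {(B, C): {A}}; returns pairs linked by start-derived paths.
          facts, work = set(), deque()
          for (u, a, v) in edges:
              for A in terminal_rules.get(a, ()):
                  facts.add((A, u, v)); work.append((A, u, v))
          while work:
              (B, u, w) = work.popleft()
              new = []
              for (C, x, v) in list(facts):
                  if x == w:                     # combine B then C: A spans u..v
                      new += [(A, u, v) for A in binary_rules.get((B, C), ())]
                  if v == u:                     # combine C then B: A spans x..w
                      new += [(A, x, w) for A in binary_rules.get((C, B), ())]
              for f in new:
                  if f not in facts:
                      facts.add(f); work.append(f)
          return {(u, v) for (A, u, v) in facts if A == start}

      # S -> a S b | a b, in CNF: S -> A T | A B ; T -> S B ; A -> a ; B -> b
      edges = {(0, "a", 1), (1, "a", 2), (2, "b", 3), (3, "b", 4)}
      print(cfpq(edges, {"a": {"A"}, "b": {"B"}},
                 {("A", "T"): {"S"}, ("A", "B"): {"S"}, ("S", "B"): {"T"}}))
      # -> {(1, 3), (0, 4)}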

7
  • ANDERSON PABLO NASCIMENTO DA SILVA
  • A Monitoring Platform for Heart Arrhythmia in Real-time Flows

  • Advisor : GIBEON SOARES DE AQUINO JUNIOR
  • COMMITTEE MEMBERS :
  • FERNANDO ANTONIO MOTA TRINTA
  • GIBEON SOARES DE AQUINO JUNIOR
  • JOAO CARLOS XAVIER JUNIOR
  • THAIS VASCONCELOS BATISTA
  • Data: Feb 27, 2018


  • Show Abstract
  • In the last decade, there has been rapid growth in the ability of computer systems to collect and carry large amounts of data. Scientists and engineers who collect this data have often turned to machine learning to find solutions to the problem of turning that data into information. For example, various medical devices, such as health monitoring systems and medicine boxes with embedded sensors, allow raw data to be collected, stored, and analyzed, and through this analysis one can derive insights and decisions from such data sets. With the use of health applications based on machine learning, there is an opportunity to improve the quality and efficiency of medical care and, consequently, the well-being of patients. Thus, the general objective of this work is the construction of an intelligent cardiac arrhythmia monitoring platform that allows monitoring, identifying, and alerting health professionals, patients, and relatives in real time about a hospitalized patient's health. The architecture and implementation of the platform were based on the Weka API and, as part of this work, a proof of concept of the use of the platform, involving modules and applications developed in Java, was implemented.

8
  • ALTAIR BRANDÃO MENDES
  • Mandala - SoS-based interoperability in smart cities

  • Advisor : THAIS VASCONCELOS BATISTA
  • COMMITTEE MEMBERS :
  • ELISA YUMI NAKAGAWA
  • FREDERICO ARAUJO DA SILVA LOPES
  • GIBEON SOARES DE AQUINO JUNIOR
  • THAIS VASCONCELOS BATISTA
  • Data: Feb 28, 2018


  • Show Abstract
  • Currently, cities depend considerably on information systems. A large part of these systems, whether under public or private management, was developed using technologies and concepts that are now considered outdated. Moreover, because they were not designed to communicate with other systems in an interoperable way, many of these systems are isolated and non-standardized solutions. In contrast, the dynamism demanded by companies, government, and, mainly, the population presupposes the union of these systems, working in an integrated and interoperable way. This interoperability is critical to achieving the efficiency and effectiveness expected from the use of resources in a smart city. Furthermore, the union of these systems can bring previously unimaginable results when compared to the results obtained by each system in isolation. These characteristics refer to the concept of a System of Systems (SoS), a set of complex, independent, heterogeneous systems that have their own purposes and collaborate with each other to achieve common goals. The interaction between the different systems enabled by an SoS is more than the sum of the systems involved, since it allows an SoS to offer new functionalities that are not provided by any of the systems operating alone. Based on the above characteristics, this work proposes Mandala, an SoS-centric middleware that enables interoperability between information systems in smart cities. The goal is to make the heterogeneity of the systems involved transparent, providing an environment for the integration and interoperation of information systems.

9
  • FÁBIO PHILLIP ROCHA MARQUES
  • From the Alphabets to the Proficiency Exam: A Systematic Review of Applications for Teaching and Reviewing the Japanese Language

  • Advisor : LEONARDO CUNHA DE MIRANDA
  • COMMITTEE MEMBERS :
  • ANDRE MAURICIO CUNHA CAMPOS
  • LEONARDO CUNHA DE MIRANDA
  • MARCIA JACYNTHA NUNES RODRIGUES LUCENA
  • ROMMEL WLADIMIR DE LIMA
  • Data: May 28, 2018


  • Show Abstract
  • Japanese is a language with writing, vocabulary, grammar, and pronunciation quite different from those of Western languages: it contains three scripts (two syllabic and one logographic), its vocabulary, orthography, and phonology were built upon different nations, and its grammar has many rules and forms, which may even differ according to the degree of formality between the listener and the speaker. Therefore, studying Japanese requires a lot of dedication and practice. To support the study of the language, more than 3100 applications are available in virtual stores with the intention of supporting students in learning and reviewing the Japanese alphabets, vocabulary, grammar, and listening comprehension, as well as preparing for the Japanese Language Proficiency Test (JLPT). However, little has been investigated about the contents, teaching and reviewing methodology, and technological features of these applications. This research systematically reviews applications focused on supporting Japanese language study, based on a proposed framework for qualitative and quantitative review of language learning software. An individual evaluation is carried out for each part of the language, starting with the alphabets and proceeding with vocabulary, grammar, and listening comprehension, in order to study the applications for each component of the Japanese language, and finishing with the analysis of applications geared towards JLPT preparation, since there are applications with content and presentation adjusted specifically for the exam. Research findings include details of the main features of applications in the current scenario, a classification and comparison of the most recommended applications for the Android and iOS mobile platforms, a comparison between Android and iOS apps regarding the support provided for study, and a study of features that rarely appear in current applications but are nonetheless very important for helping study Japanese.

10
  • ISLAME FELIPE DA COSTA FERNANDES
  • Hybrid Metaheuristics Applied to the Multi-objective Spanning Tree Problem

  • Advisor : ELIZABETH FERREIRA GOUVEA GOLDBARG
  • COMMITTEE MEMBERS :
  • ELIZABETH FERREIRA GOUVEA GOLDBARG
  • MARCO CESAR GOLDBARG
  • SILVIA MARIA DINIZ MONTEIRO MAIA
  • THATIANA CUNHA NAVARRO DE SOUZA
  • Data: Jul 6, 2018


  • Show Abstract
  • The Multi-objective Spanning Tree Problem (MSTP) is an NP-hard extension of the Minimum Spanning Tree (MST) problem. Since the MSTP models several real-world problems in which conflicting objectives must be optimized simultaneously, it has been extensively studied in the literature, and several exact and heuristic algorithms have been proposed for it. Moreover, over the last years, research has shown the considerable performance of algorithms that combine several metaheuristic strategies; these are called hybrid algorithms, and previous works have successfully applied them to several optimization problems. In this work, five new hybrid algorithms are proposed for two versions of the MSTP: three for the bi-objective version (BiST), based on Pareto dominance, and two for the many-objective version, based on the ordered weighted average operator (OWA-ST). This research hybridizes elements from various metaheuristics. Computational experiments investigated the potential of the new algorithms concerning computational time and solution quality, and the results were compared to the state of the art.
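    For reference, the Pareto dominance relation underlying the bi-objective algorithms can be stated as follows (this is the standard textbook definition, assuming minimization of $k$ objectives $f_1,\dots,f_k$, not anything specific to the dissertation):

\[
x \prec y \;\Longleftrightarrow\; \big(\forall i \in \{1,\dots,k\}:\, f_i(x) \le f_i(y)\big) \;\wedge\; \big(\exists j:\, f_j(x) < f_j(y)\big),
\]

    and a spanning tree is Pareto-optimal when no other spanning tree dominates it; the algorithms approximate the set of such non-dominated trees.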

11
  • JÉSSICA LAÍSA DIAS DA SILVA
  • Game Design of Computational Thinking Games inspired by the Bebras Challenge Evaluation Instrument

  • Advisor : EDUARDO HENRIQUE DA SILVA ARANHA
  • COMMITTEE MEMBERS :
  • EDUARDO HENRIQUE DA SILVA ARANHA
  • JACQUES DUÍLIO BRANCHER
  • MARCIA JACYNTHA NUNES RODRIGUES LUCENA
  • Data: Jul 25, 2018


  • Show Abstract
  • Several skills are required in this century, among them computer-related skills. As Blikstein (2008) states, the list of skills required for this century is quite extensive; he emphasizes computational thinking as one of the most significant and, at the same time, least understood. Computational Thinking (CT) can be defined as a problem-solving process that encompasses concepts, skills and practices of Computer Science. Among the international efforts to disseminate CT, we highlight the Bebras Challenge Test, whose main goal is to motivate primary and secondary school students, as well as the general public, to become interested in computing and CT. Widespread teaching of CT is important, but digital games still largely fail to address the skills proposed for the teaching and learning of CT. Thus, the present work aims to investigate the quality of the game design of educational games created from questions of the Bebras Challenge Test.

12
  • WENDELL OLIVEIRA DE ARAÚJO
  • Procedural Content Generation for Creating Levels of Educational Games

  • Advisor : EDUARDO HENRIQUE DA SILVA ARANHA
  • COMMITTEE MEMBERS :
  • EDUARDO HENRIQUE DA SILVA ARANHA
  • JACQUES DUÍLIO BRANCHER
  • MARCIA JACYNTHA NUNES RODRIGUES LUCENA
  • Data: Jul 25, 2018


  • Show Abstract
  • Educational digital games have been a growing research area on the international scene, owing to the potential of games for fun, immersion, and stimulating learning in a natural and personalized way. One of the great challenges in this area is the creation of games that cover the content to be taught or practiced. In this sense, procedural content generation (PCG) has emerged as an area that can assist the development of educational games. PCG deals with the automatic creation of content such as textures, sounds, objects and, in the context of this work, levels, contributing to the creation of new levels without the need for human intervention. This research seeks to use PCG techniques to create levels of educational games that require the player to achieve certain pedagogical goals throughout the game. We propose a generation approach in three stages: (i) generation of the basic structure of the level (e.g., only floors and walls); (ii) generation of the elements related to the pedagogical objectives of the level; and (iii) completion of the remainder of the level with enemies and other scenario elements. The approach can thus be used to create different challenges and scenarios for a student to practice certain content, since whenever a challenge is completed a new one can be generated. The approach is investigated using the grammar-based PCG technique, as sketched below, and we seek to verify whether the technique, in conjunction with the proposed approach, generates content effectively, evaluating its quality and functionality with elementary school students.
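    To make the grammar-based technique concrete, here is a minimal, self-contained sketch of how production rules can expand into a level layout; the rule names and level elements are hypothetical, not the generator used in the work:

```python
import random

# Hypothetical production rules: each non-terminal expands into a sequence
# of non-terminals or terminals (concrete level elements).
RULES = {
    "LEVEL": [["STRUCTURE", "PEDAGOGY", "FILLER"]],
    "STRUCTURE": [["floor", "wall"], ["floor", "platform", "wall"]],
    "PEDAGOGY": [["math_challenge"], ["reading_challenge", "math_challenge"]],
    "FILLER": [["enemy", "decoration"], ["decoration"]],
}

def expand(symbol):
    """Recursively expand a symbol until only terminals remain."""
    if symbol not in RULES:              # terminal: a concrete level element
        return [symbol]
    production = random.choice(RULES[symbol])
    level = []
    for s in production:
        level.extend(expand(s))
    return level

print(expand("LEVEL"))  # e.g. ['floor', 'wall', 'math_challenge', 'decoration']
```

    Stage (ii) of the approach corresponds to rules like "PEDAGOGY" above: by constraining which productions are available, the generator can guarantee that every level contains the elements tied to the pedagogical goals.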


13
  • RAFAEL FERREIRA TOLEDO
  • Recovery Mechanism Based on a Rewriting Process for Web Service Compositions

  • Advisor : UMBERTO SOUZA DA COSTA
  • COMMITTEE MEMBERS :
  • GENOVEVA VARGAS-SOLAR
  • MARTIN ALEJANDRO MUSICANTE
  • UMBERTO SOUZA DA COSTA
  • Data: Jul 26, 2018


  • Show Abstract
  • Web service compositions are exposed to a wide variety of failures. Remotely located service components can represent potential problems, whether due to the connectivity required for communication or to changes implemented by their providers during system updates. These problems are unexpected events that compromise the correctness and availability of a given service composition. This dissertation presents an approach to improve the robustness of Web service compositions by recovering from failures that occur at different moments of their execution. We first present a taxonomy of failures as an overview of previous research on fault recovery of service compositions. The resulting classification is used to propose our self-healing method for Web service orchestrations. The proposed method, based on the refinement process of compositions, takes user preferences into account to generate the best possible recovering compositions. To validate our approach, we produced a prototype implementation capable of simulating and analyzing different fault scenarios. To that end, our work introduces algorithms for generating synthetic compositions and Web services. In this setting, both the recovery time and the user-preference degradation are investigated under different strategies, namely local, partial, and total recovery, which represent different levels of intervention on the composition.
14
  • GABRIEL DE ALMEIDA ARAÚJO
  • Interactive Platform for Velocity Analysis of Seismic Data

  • Advisor : BRUNO MOTTA DE CARVALHO
  • COMMITTEE MEMBERS :
  • BRUNO MOTTA DE CARVALHO
  • MONICA MAGALHAES PEREIRA
  • CARLOS CESAR NASCIMENTO DA SILVA
  • ARMANDO LOPES FARIAS
  • Data: Jul 27, 2018


  • Show Abstract
  • With the advancement of hydrocarbon exploration, the oil industry has been searching for ways to minimize exploratory risks, one of them being the improvement of the tools used. This exploration comprises three steps: seismic data acquisition, seismic processing, and seismic interpretation. This work belongs to seismic processing, more specifically to one of its stages, seismic velocity analysis, which aims to find the seismic velocity field that yields reliable models of the Earth's subsurface through known velocity analysis algorithms. One objective of this work is the creation of tools that facilitate velocity analysis by implementing these algorithms so that they work integrated into a single analysis platform. The advance of exploration has also brought a considerable increase in the volume of acquired seismic data, leading to a growing need for computational processing power. Given this need, we present a methodology for velocity analysis using GPUs and its results, showing the viability of using GPUs to accelerate geophysics algorithms, particularly velocity analysis algorithms. Finally, case studies are presented, showing the performance results of the CPU and GPU versions of the algorithms.
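    As an illustration of the kind of algorithm such a platform implements, below is a minimal NumPy sketch of the classical semblance coherence measure used in velocity analysis (a textbook formulation; the function name and parameters are illustrative, not the platform's API):

```python
import numpy as np

def semblance(gather, offsets, dt, t0, v, half_window=5):
    """Semblance for one (t0, v) pair on a CMP gather: values near 1 mean
    the hyperbola for trial velocity v stacks the traces coherently.
    gather: (n_samples, n_traces) array; offsets in meters; dt, t0 in s."""
    n_samples, n_traces = gather.shape
    num = den = 0.0
    for k in range(-half_window, half_window + 1):
        t = t0 + k * dt
        stack = energy = 0.0
        for i in range(n_traces):
            # Hyperbolic moveout: travel time at this offset for velocity v
            t_nmo = np.sqrt(t ** 2 + (offsets[i] / v) ** 2)
            j = int(round(t_nmo / dt))
            if 0 <= j < n_samples:
                amp = gather[j, i]
                stack += amp
                energy += amp * amp
        num += stack ** 2
        den += energy
    return num / (n_traces * den) if den > 0 else 0.0
```

    Scanning this measure over a grid of (t0, v) pairs is embarrassingly parallel, which is why a GPU port of velocity analysis pays off.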

15
  • FÁBIO ANDREWS ROCHA MARQUES
  • Development and Evaluation of Nihongo Kotoba Shiken: A Computerized Exam for the Japanese Language

  • Advisor : LEONARDO CUNHA DE MIRANDA
  • COMMITTEE MEMBERS :
  • ANDRE MAURICIO CUNHA CAMPOS
  • LEONARDO CUNHA DE MIRANDA
  • MARCIA JACYNTHA NUNES RODRIGUES LUCENA
  • ROMMEL WLADIMIR DE LIMA
  • Data: Jul 27, 2018


  • Show Abstract
  • The study of foreign languages involves the constant elaboration, application, and correction of exams. In this context, computerized tests have facilitated these tasks, but some limitations remain. Building on studies in the areas of foreign-language knowledge assessment and its automation, the present research aims to develop a method to automate knowledge assessment in the Japanese language that does not require full interaction with a professional teacher of the language and is not limited to fixed content, i.e., the content of the test must be modifiable. This work presents the research stages concerning the study and evaluation of Japanese language knowledge through technology, the design of the evaluation methodology used in the exam, the execution flow and characteristics of Nihongo Kotoba Shiken, and evaluations with a professional of the language and several Japanese language classes.

16
  • GABRIELA OLIVEIRA DA TRINDADE
  • Visualization of Traceability in Agile Projects through Data Contained in Project Management Support Tools


  • Advisor : MARCIA JACYNTHA NUNES RODRIGUES LUCENA
  • COMMITTEE MEMBERS :
  • GILBERTO AMADO DE AZEVEDO CYSNEIROS FILHO
  • LYRENE FERNANDES DA SILVA
  • MARCIA JACYNTHA NUNES RODRIGUES LUCENA
  • Data: Jul 27, 2018


  • Show Abstract
  • Software traceability, understood as the relationships between software engineering artifacts, brings great advantages to the development process. The information it provides supports decision making in the face of change, better understanding of artifacts, reusability, maintenance, and the forecasting of costs and deadlines, among others. In environments that increasingly adopt agile methodologies, with the client side by side giving constant feedback, adapting to requested changes has become common practice during system development. For changes to be made safely, traceability information helps decision making, so that a change does not introduce inconsistencies or errors and does not generate system failures.
    Some project management tools support traceability elements. However, given the volume of data that such a practice can produce, interpreting the data is difficult, especially when it is presented only textually. Since information visualization makes it possible to analyze large volumes of data quickly and clearly, supporting safer decision making and revealing previously unseen information, traceability visualization techniques can be found in the literature. Such techniques, however, require more than the data itself: they need to consider the pillars of information discussed in academia (the problem, and what, when, and who to visualize) to produce an adequate visualization.
    With this purpose, this work conducts interviews in industry to answer the pillars of information considered in the proposal of a visualization, followed by an analysis of the collected data based on Grounded Theory. Then, given the assembled traceability context, the defined profiles, needs, and problems, and the artifacts generated in agile environments, the existing information visualizations in the bibliography are studied.
    As a result, a discussion and a suggestion of appropriate visualizations for traceability information are made, based on the suggestions in the literature and the data collected from the interviews. Finally, using the heuristics created, the project management tools that integrate with the GitHub hosting and versioning platform are evaluated, to check whether they provide the identified visualization of traceability information.

17
  • FRANCISCO GENIVAN SILVA
  • Analysis of Student Behavior in Video Lessons

  • Advisor : EDUARDO HENRIQUE DA SILVA ARANHA
  • COMMITTEE MEMBERS :
  • EDUARDO HENRIQUE DA SILVA ARANHA
  • FERNANDO MARQUES FIGUEIRA FILHO
  • ISABEL DILLMANN NUNES
  • FABIANO AZEVEDO DORÇA
  • Data: Jul 27, 2018


  • Show Abstract
  • Distance education and the use of e-learning systems contribute to the large-scale generation of educational data. The use of databases and the storage of execution logs make these data more easily accessible and suitable for the investigation of educational processes. Methodologies for automatically extracting useful information from large volumes of data, especially data mining, have contributed significantly to improvements in education. However, most traditional methods focus solely on the data or on how they are structured, with little concern for the educational process as a whole. In addition, little attention has been paid to data about student behavior during the use of educational resources and media. Video lessons are a significant part of many courses, showing that video culture is increasingly widespread and part of students' daily lives. We therefore understand that analyzing student behavior during video playback can contribute to a more accurate evaluation of the quality of the topics addressed and of the way they were presented. Thus, this master's work consisted of studies conducted to investigate how students behave while using video lessons, in order to propose an approach to evaluate this resource. The evaluation of video lessons follows a process that involves extracting information from log files and modeling actions through process mining. Initial results show that the number of views, the time spent, and the moment students drop out of a video are variables with great capacity to provide useful information about learning. This shows that evaluating the educational resource by analyzing student actions can contribute substantially to education, helping to identify bottlenecks in the learning process and to anticipate problems, especially in distance education. The results obtained in the first studies, applying process mining to experimental data, provided greater clarity about student behavior during video lessons, indicating the actions to be taken by teachers or content producers. The work thus contributes to improving key aspects of video lessons from a multidisciplinary approach, directly helping educators and managers to promote a more complete education based on better-quality resources.
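    For context, a small sketch of the kind of process-mining step described above, using the pm4py library on illustrative event data (the column names and actions are hypothetical, not the study's logs):

```python
import pandas as pd
import pm4py

# Toy interaction log: one case per student, one event per player action
events = pd.DataFrame({
    "student": ["s1", "s1", "s1", "s2", "s2"],
    "action": ["play", "pause", "drop_out", "play", "finish"],
    "time": pd.to_datetime([
        "2018-05-01 10:00", "2018-05-01 10:04", "2018-05-01 10:05",
        "2018-05-01 11:00", "2018-05-01 11:12"]),
})
log = pm4py.format_dataframe(events, case_id="student",
                             activity_key="action", timestamp_key="time")
dfg, start, end = pm4py.discover_dfg(log)  # directly-follows graph of actions
print(dfg)  # e.g. {('play', 'pause'): 1, ('pause', 'drop_out'): 1, ...}
```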

18
  • DANNYLO JOHNATHAN BERNARDINO EGÍDIO
  • A framework proposal to facilitate the development of IoT-based applications

  • Advisor : GIBEON SOARES DE AQUINO JUNIOR
  • COMMITTEE MEMBERS :
  • GIBEON SOARES DE AQUINO JUNIOR
  • EDUARDO HENRIQUE DA SILVA ARANHA
  • DIEGO RODRIGO CABRAL SILVA
  • KIEV SANTOS DA GAMA
  • Data: Jul 30, 2018


  • Show Abstract
  • Recent years have been marked by growing advances in embedded computing, sensing technologies, and connected devices. Such advances have had a significant impact on innovative paradigms such as the Internet of Things (IoT), in which intelligent objects connected to the network can cooperate with each other to achieve a common goal. This growth has leveraged vendor initiatives to produce protocols and communication standards to enable such cooperation; however, the considerable diversity of devices, and consequently of protocols, has made this process difficult, creating numerous challenges, including heterogeneity and interoperability. These challenges have made the development of IoT applications a complex and costly task, since the capabilities of these protocols and standards for discovering devices on the network and communicating with them are quite specific to each device, forcing developers to create complex integration strategies to deal with this limitation. This work therefore proposes a framework that seeks to simplify the development of IoT applications through device virtualization: the heterogeneous aspects of the devices are abstracted by the virtualization, and common protocol operations, such as device discovery and communication, are exposed through a common interface that integrates the devices and reduces the impact of their heterogeneous characteristics.
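    A minimal sketch of the kind of common interface such device virtualization might expose; all names here are hypothetical, invented for illustration rather than taken from the framework:

```python
from abc import ABC, abstractmethod

class VirtualDevice(ABC):
    """Hypothetical common interface hiding protocol-specific details
    (discovery, addressing, message formats) behind one abstraction."""

    @abstractmethod
    def discover(self) -> list[str]:
        """Return identifiers of reachable physical devices."""

    @abstractmethod
    def read(self, device_id: str, resource: str):
        """Read a named resource (e.g. a sensor value) from a device."""

    @abstractmethod
    def write(self, device_id: str, resource: str, value) -> None:
        """Write a value to a device resource (e.g. an actuator)."""

# One concrete adapter would be written per protocol (MQTT, CoAP, ...);
# applications program only against VirtualDevice and stay protocol-agnostic.
```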

19
  • ERITON DE BARROS FARIAS
  • Recommendations Catalog to Support Agile Adoption or Transformation

  • Advisor : MARCIA JACYNTHA NUNES RODRIGUES LUCENA
  • COMMITTEE MEMBERS :
  • FERNANDO MARQUES FIGUEIRA FILHO
  • MARCIA JACYNTHA NUNES RODRIGUES LUCENA
  • MARILIA ARANHA FREIRE
  • Data: Jul 30, 2018


  • Show Abstract
  • The number of studies on agile methods has increased in academia. Agile software development has a significant positive impact on the performance of development teams, software quality, and user satisfaction. Accordingly, Agile Adoption and Agile Transformation are among the most relevant themes in the main events on agile. Many teams that work with agile development report missing a tutorial or document in which they could find solutions to help them carry out Agile Transformation or Adoption processes more easily. Therefore, this work analyzes and categorizes information that can assist teams in these processes. The result of this analysis was organized in a catalog called the Recommendations Catalog to Assist Agile Adoption or Transformation.

20
  • VINÍCIUS ARAÚJO PETCH
  • Profitable Tour Problem with Passengers and Time Constraints (PTP-TR)

  • Advisor : MARCO CESAR GOLDBARG
  • COMMITTEE MEMBERS :
  • ELIZABETH FERREIRA GOUVEA GOLDBARG
  • MARCO CESAR GOLDBARG
  • MATHEUS DA SILVA MENEZES
  • SILVIA MARIA DINIZ MONTEIRO MAIA
  • Data: Aug 6, 2018


  • Show Abstract
  • This work models and examines solutions to the Profitable Tour Problem with Passengers and Time Constraints (PTP-TR). It proposes a mathematical model for the problem, an exact solution algorithm, and metaheuristics for approximating solutions. Because the model is not described in the literature, test instances were also created to support the computational experiments required by this research. The work carries out a computational experiment to evaluate the performance of the mathematical model and to delineate the approximation ability of the metaheuristic algorithms for the problem. Finally, it presents the schedule for the master's defense and discusses how the problem can be developed in future work.

21
  • LUCAS MARIANO GALDINO DE ALMEIDA
  • Mining Exceptional Interfaces based on GitHub: An Exploratory Study

  • Advisor : ROBERTA DE SOUZA COELHO
  • COMMITTEE MEMBERS :
  • ROBERTA DE SOUZA COELHO
  • UIRA KULESZA
  • EIJI ADACHI MEDEIROS BARBOSA
  • MARCELO DE ALMEIDA MAIA
  • Data: Aug 14, 2018


  • Show Abstract
  • Uncaught exceptions are not an exceptional scenario in current applications; they are estimated to account for two thirds of system crashes. Such exceptions can be thrown by the application itself, by the underlying system or hardware, or even by a reused API. More often than not, the documentation about the runtime exceptions signaled by API methods is absent or incomplete. As a consequence, developers usually discover such exceptions when they happen in the production environment, leading to application crashes. This work reports an exploratory study that mined the exception stack traces embedded in GitHub issues to discover the undocumented exception interfaces of API methods. Overall, the issues of 2,970 Java projects hosted on GitHub were mined and 66,118 stack traces were extracted. A set of top Maven APIs was then investigated using this stack trace data set, and undocumented exception interfaces could be discovered. The results of the mining study show that the information embedded in issues can indeed be used to discover undocumented exceptions thrown by API methods.
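    A minimal sketch of the extraction step such a study needs: pulling Java exception names and stack frames out of free-form issue text. The regexes and helper names are my own illustration, not the study's tooling:

```python
import re

# "at package.Class.method(File.java:123)" frames and exception class names
FRAME = re.compile(r"at\s+([\w$.]+)\.([\w$<>]+)\(([\w.]+\.java):(\d+)\)")
EXCEPTION = re.compile(r"([\w.]+(?:Exception|Error))(?::|\s|$)")

def extract(issue_body: str):
    exceptions = EXCEPTION.findall(issue_body)
    frames = [{"class": c, "method": m, "file": f, "line": int(l)}
              for c, m, f, l in FRAME.findall(issue_body)]
    return exceptions, frames

body = """Crash report:
java.lang.IllegalStateException: connection closed
    at com.example.http.Client.send(Client.java:42)
    at com.example.App.main(App.java:7)
"""
print(extract(body))  # links the exception type to the API method that threw it
```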


22
  • JOÃO CARLOS EPIFANIO DA SILVA
  • Investigation of Requirements Engineering Education from the Academy and Industry Perspectives: Focus on Context Interpretation and Requirements Writing

  • Advisor : MARCIA JACYNTHA NUNES RODRIGUES LUCENA
  • COMMITTEE MEMBERS :
  • MARCIA JACYNTHA NUNES RODRIGUES LUCENA
  • LYRENE FERNANDES DA SILVA
  • ISABEL DILLMANN NUNES
  • MARIA LENCASTRE PINHEIRO DE MENEZES E CRUZ
  • Data: Aug 15, 2018


  • Show Abstract
  • In the literature, many problems are pointed out regarding the Requirements Engineering process. Recent research shows that software development environments face many challenges, ranging from requirements elicitation to validation. The challenges listed in the literature are topics covered in academia in Requirements Engineering courses. Those challenges impact product quality and may compromise the continuity of a project. We therefore believe there may be a deficit in the teaching of the course that impacts industry, besides a possible lack of alignment between the two contexts. Given that scenario, this work lists methodologies and activities that change the traditional method of teaching Requirements Engineering, focusing on interpreting solutions and writing requirements. To that end, a systematic literature review was performed to identify how the course is taught, and a survey of professors and industry practitioners was conducted to identify the state of the course and the difficulties within the area in the country. We verified that professors and industry face many challenges, and that the industry challenges may be a consequence of academic teaching. It is necessary to know the challenges before they impact the job market, which means they need to be identified while still in academia. From the results obtained, we conclude that it is indeed essential to overcome the presented challenges while still in academia, and that more practical activities and new approaches are needed in the classroom. On the industry side, we recommend collaboration with academia: once industry demands are identified, academia can provide future professionals with an education based on the expected skills.


23
  • JEFFERSON IGOR DUARTE SILVA
  • An AI-based Tool for Networks-on-Chip Design Space Exploration

  • Advisor : MARCIO EDUARDO KREUTZ
  • COMMITTEE MEMBERS :
  • DEBORA DA SILVA MOTTA MATOS
  • MARCIO EDUARDO KREUTZ
  • MONICA MAGALHAES PEREIRA
  • Data: Aug 29, 2018


  • Show Abstract
  • With the increasing number of cores in Systems-on-Chip (SoCs), bus architectures have suffered limitations regarding performance. As applications demand more bandwidth and lower latencies, buses cannot comply with such requirements, due to longer wires and increased capacitance. Facing this scenario, Networks-on-Chip (NoCs) emerged as a way to overcome the limitations found in bus-based systems. NoCs are composed of a set of routers and communication links, each component with its own characteristics. Fully exploring all possible NoC configurations is unfeasible due to the huge design space to cover; therefore, methods to speed up this process are needed. In this work we propose the use of Artificial Intelligence techniques to optimize NoC architectures, by developing an AI-based tool that explores the design space in terms of latency prediction for different NoC component configurations. Up to now, nine classifiers have been evaluated. To evaluate the tool, tests were performed on audio/video applications with two traffic patterns, Perfect Shuffle and Matrix Transpose, with four different communication requirements. Preliminary results show an accuracy of up to 85% using a Decision Tree to predict latency values.
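    For illustration, a minimal sketch of the kind of latency classifier such a tool trains; the feature names and synthetic data below are hypothetical stand-ins for simulated NoC configurations:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Hypothetical features: [buffer depth, virtual channels, link width, injection rate]
X = rng.uniform(size=(500, 4))
# Synthetic latency class (e.g. low/high) standing in for simulator output
y = (X @ np.array([2.0, -1.0, -1.5, 3.0]) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = DecisionTreeClassifier(max_depth=5).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, model.predict(X_te)))
```

    Once trained, such a predictor replaces expensive simulation runs when ranking candidate configurations, which is what makes the exploration tractable.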

24
  • JHOSEPH KELVIN LOPES DE JESUS
  • Information Theory Approaches for Automated Feature Selection

  • Advisor : ANNE MAGALY DE PAULA CANUTO
  • COMMITTEE MEMBERS :
  • ANNE MAGALY DE PAULA CANUTO
  • BENJAMIN RENE CALLEJAS BEDREGAL
  • DANIEL SABINO AMORIM DE ARAUJO
  • ANDRÉ CARLOS PONCE DE LEON FERREIRA DE CARVALHO
  • Data: Sep 21, 2018


  • Show Abstract
  • One of the main problems of machine learning algorithms is dimensionality. With the rapid growth of complex data in real-world scenarios, feature selection becomes a mandatory pre-processing step in any application, to reduce data complexity and computational time, and several works have developed efficient methods to accomplish this task. Most feature selection methods select the best attributes based on some specific criterion. In addition, recent studies have successfully constructed models that select attributes considering the particularities of the data, assuming that similar samples should be treated separately. Although some progress has been made, a poor choice of a single algorithm or criterion to assess the importance of attributes, and the arbitrary choice of the number of attributes by the user, can lead to poor analyses. To overcome some of these issues, this work develops two strands of automated feature selection approaches. The first comprises fusion methods of multiple feature selection algorithms, which use ranking-based strategies and classifier committees to combine feature selection algorithms at the data level (data fusion) and at the decision level (decision fusion), allowing researchers to consider different perspectives in the feature selection step. The second method (PF-DFS) improves a dynamic selection algorithm (DFS) using the idea of Pareto-front multi-objective optimization, which allows considering different perspectives on the relevance of the attributes and automatically defining the number of attributes to select (see the sketch below). The proposed approaches were tested on more than 15 real and artificial datasets, and the results showed that, compared with individual selection methods such as the original DFS itself, the performance of one of the proposed methods is notably higher. The results are promising, since the proposed approaches also achieved superior performance when compared with established dimensionality reduction methods and with the original data sets, showing that the reduction of noisy and/or redundant attributes can have a positive effect on the performance of classification tasks.
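    A minimal sketch of the Pareto-front idea described for PF-DFS: score each feature under two criteria and keep the non-dominated ones, letting the front itself determine how many features are selected. The two criteria used here (mutual information and ANOVA F) are illustrative choices, not necessarily those of the thesis:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import f_classif, mutual_info_classif

X, y = load_breast_cancer(return_X_y=True)
scores = np.column_stack([mutual_info_classif(X, y, random_state=0),
                          f_classif(X, y)[0]])  # one row of criteria per feature

def pareto_front(points):
    """Indices of non-dominated points (maximization in every column)."""
    return [i for i, p in enumerate(points)
            if not any(np.all(q >= p) and np.any(q > p) for q in points)]

selected = pareto_front(scores)
print("automatically selected features:", selected)  # no user-chosen cutoff
```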

25
  • SAMUEL DA SILVA OLIVEIRA
  • Optimization of Irregular NoC Topology for Real-Time and Non-Real-Time Applications in Networks-on-Chip based MP-SoCs.

  • Advisor : MARCIO EDUARDO KREUTZ
  • COMMITTEE MEMBERS :
  • MARCIO EDUARDO KREUTZ
  • MONICA MAGALHAES PEREIRA
  • GUSTAVO GIRAO BARRETO DA SILVA
  • ALISSON VASCONCELOS DE BRITO
  • Data: Dec 7, 2018


  • Show Abstract
  • With the evolution of multiprocessing architectures, Networks-on-Chip (NoCs) have become a viable solution for the communication subsystem. Among the many possible architectural implementations, some use regular topologies, which are more common and easier to design; others follow irregular communication patterns, leading to irregular topologies. A good design space exploration can find the best-performing configuration among all architectural possibilities. This work proposes a network with an optimized irregular topology, whose communication is based on routing tables, and a tool that performs this exploration through a genetic algorithm (sketched below). The proposed network features heterogeneous routers (which can help with network optimization) and supports real-time and non-real-time packets. The goal of this work is to find, through design space exploration, a network (or a set of networks) with the best average latency and the highest percentage of packets that meet their deadlines.
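    A compact, self-contained sketch of a genetic search over link-activation encodings of an irregular topology; the stand-in fitness below replaces the latency/deadline simulation such a tool would actually use:

```python
import random

random.seed(0)
N_LINKS = 20
WEIGHTS = [random.uniform(-1.0, 1.0) for _ in range(N_LINKS)]  # stand-in utilities

def fitness(genome):
    # Stand-in objective; the real tool would simulate average latency
    # and the fraction of real-time packets meeting their deadlines.
    return sum(w * g for w, g in zip(WEIGHTS, genome))

def evolve(pop_size=30, generations=100, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in range(N_LINKS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                 # elitist selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_LINKS)         # one-point crossover
            children.append([g if random.random() >= p_mut else 1 - g
                             for g in a[:cut] + b[cut:]])  # bit-flip mutation
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best, round(fitness(best), 3))
```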

26
  • SAMUEL DE MEDEIROS QUEIROZ
  • INFRASTRUCTURE AS A SERVICE INTRA-PLATFORM INTEROPERABILITY: An Exploratory Study with OpenStack

     


  • Advisor : THAIS VASCONCELOS BATISTA
  • COMMITTEE MEMBERS :
  • ANDREY ELÍSIO MONTEIRO BRITO
  • JACQUES PHILIPPE SAUVÉ
  • NELIO ALESSANDRO AZEVEDO CACHO
  • THAIS VASCONCELOS BATISTA
  • Data: Dec 10, 2018


  • Show Abstract
  • The emergence of new digital technologies comes with challenging technical and business requirements. The traditional approach to provisioning computational infrastructure for application workloads, which relies on in-house management of hardware, does not offer the technical and cost-effective attributes needed to deliver high performance, reliability, and scalability. Cloud computing, a major technological paradigm shift, allows diverse deployment and service model alternatives suited to diverse requirements, such as security, latency, computational performance, availability, and cost. Numerous companies therefore operate thousands of clouds worldwide, creating a competitive market where players build unique features to differentiate themselves from competitors. Consequently, on the consumer side, picking a vendor typically translates into vendor lock-in, a situation where applications depend heavily on the vendor's way of exposing features, making it difficult to switch vendors when convenient or to support complex scenarios across multiple distributed heterogeneous clouds, such as federation. An immediate work-around for users is to pick cloud solutions that implement standards or post-facto open-source platforms, such as OpenStack, which are assumed to provide native interoperability between installations. In industry, however, OpenStack shows that the lack of interoperability is a real concern even between its own deployments, due to the high flexibility and complexity of the supported use cases. This investigation therefore documents intra-platform interoperability in OpenStack, presenting in detail the Python client library created by the community to abstract deployment differences, which includes numerous significant contributions from the author. An extensive validation of that library is then performed across one testing cloud and five production clouds from different vendors worldwide, because, despite being extensively used by the community, the library had never been formally validated. The validation unveiled bugs as well as functionality and documentation gaps. Since OpenStack intra-platform interoperability had never been documented in the literature, a systematic literature review followed, allowing a deep comparison between the state of the art of vendor lock-in taxonomies and approaches and the library, presenting its advantages, disadvantages, and recommendations for users. Lastly, suggestions for future work include support for multiple programming languages and the adoption of the client library as a standard for inter-platform interoperability.
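    For context, the community Python library discussed here abstracts per-cloud differences behind a profile file, so the same application code can target different OpenStack deployments. A minimal sketch (the cloud name is illustrative; the calls are from the openstacksdk/shade lineage, which I take to be the library in question):

```python
import openstack

# Credentials, region and vendor quirks come from a clouds.yaml profile,
# keeping the application code identical across deployments.
conn = openstack.connect(cloud="my-cloud")

for server in conn.list_servers():
    print(server.name, server.status)
```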

27
  • ALLAN VILAR DE CARVALHO
  • The Problem of the Traveling Salesman with Multiple Passengers and Quota

  • Advisor : MARCO CESAR GOLDBARG
  • COMMITTEE MEMBERS :
  • ELIZABETH FERREIRA GOUVEA GOLDBARG
  • MARCO CESAR GOLDBARG
  • MATHEUS DA SILVA MENEZES
  • SILVIA MARIA DINIZ MONTEIRO MAIA
  • Data: Dec 14, 2018


  • Show Abstract
  • This work presents the Traveling Salesman Problem with Multiple Passengers and Quota, a variant of the Traveling Salesman Problem with Quota in which the salesman, as the driver of the vehicle, can share the car seats with passengers who request rides in the localities along the route. Every passenger on board must share the cost of the route segments traveled together with the salesman. A mathematical model, an instance bank, and a set of resolution methods, composed of an exact method, an ad hoc heuristic, and seven metaheuristics, are proposed for the problem. Results of the exact method for instances with 10 and 20 localities are reported, and quantitative and qualitative analyses of computational experiments comparing the resolution methods are presented.

Thesis
1
  • SAMUEL LINCOLN MAGALHÃES BARROCAS
  • A Strategy to verify the code generation from Circus to Java

  • Advisor : MARCEL VINICIUS MEDEIROS OLIVEIRA
  • COMMITTEE MEMBERS :
  • MARCEL VINICIUS MEDEIROS OLIVEIRA
  • MARTIN ALEJANDRO MUSICANTE
  • UMBERTO SOUZA DA COSTA
  • ALEXANDRE CABRAL MOTA
  • BRUNO EMERSON GURGEL GOMES
  • Data: Feb 22, 2018


  • Show Abstract
  • The use of automatic code generators for formal methods not only minimizes the effort of implementing software systems but also reduces the chance of errors in the execution of such systems. These tools, however, can themselves have faults in their source code that cause errors in the generated software, and thus their verification is encouraged. This PhD thesis aims at creating and developing a strategy to verify JCircus, an automatic code generator from a large subset of Circus to Java. The interest in Circus comes from the fact that it allows the specification of the concurrent and state-rich aspects of a system in a straightforward manner. The verification strategy consists of the following steps: (1) extension of the existing operational semantics of Circus, and proof that it is sound with respect to the existing denotational semantics of Circus in the Unifying Theories of Programming (UTP), a framework that allows the proof and unification of different theories; (2) development and implementation of a strategy that refinement-checks the code generated by JCircus, through a toolchain that encompasses a Labelled Predicate Transition System (LPTS) generator for Circus and a model generator that takes this LPTS as input and generates an oracle, based on the Java Pathfinder code model-checker, that refinement-checks the code generated by JCircus. Combined with coverage-based testing techniques, we envisage improving the reliability of code generation from Circus to Java.

2
  • ROMERITO CAMPOS DE ANDRADE
  • Multicast Routing in Multisession Scenarios: Models and Algorithms.

     

  • Advisor : MARCO CESAR GOLDBARG
  • COMMITTEE MEMBERS :
  • ELIZABETH FERREIRA GOUVEA GOLDBARG
  • MARCO CESAR GOLDBARG
  • MATHEUS DA SILVA MENEZES
  • PAULO HENRIQUE ASCONAVIETA DA SILVA
  • SILVIA MARIA DINIZ MONTEIRO MAIA
  • Data: May 14, 2018


  • Show Abstract
  • Multicast technology has been studied over the last two decades and has proven to be a good approach to save network resources. Many approaches have addressed the multicast routing problem with a single session and a single source meeting the session's demand, as well as with multiple sessions and more than one source per session. In this thesis, the multicast routing problem is explored through models and algorithms designed for the case of multiple sessions and sources. Two new models are proposed, with different focuses. The first is a mono-objective model that maximizes the residual capacity Z of the network subject to a budget. The second is a multi-objective model with three objective functions: cost, Z, and hop count. Both models consider a multisession scenario with one source per session. A third model is also examined, designed to optimize Z in a scenario with multiple sessions and more than one source per session. An experimental analysis was carried out over all the models, with a set of algorithms designed for each. For the mono-objective model, an Ant Colony Optimization algorithm, a genetic algorithm, a GRASP, and an ILS algorithm were designed. For the multi-objective model, the classical approaches NSGA-II, ssNSGA-II, SMS-EMOA, GDE3, and MOEA/D were used; in addition, a transgenetic algorithm was designed and compared against them. This algorithm uses subpopulations during the evolution, each based on a solution-construction operator guided by one of the objective functions; some solutions are treated as elite solutions and are further improved by a transposon operator. Eight versions of the transgenetic algorithm were evaluated. For the model with multiple sessions and multiple sources per session, an algorithm based on Voronoi diagrams, called MMVD, was designed. The designed algorithms were evaluated in extensive experimental analyses, and the samples generated by each algorithm on the instances were compared through non-parametric statistical tests. The analysis indicates that the ILS and the genetic algorithm outperformed Ant Colony Optimization and GRASP, with the ILS showing better processing times than the genetic algorithm. In the multi-objective scenario, the transgenetic version called cross0 proved statistically better than the other algorithms on most instances according to the hypervolume and additive/multiplicative epsilon quality indicators. Finally, the MMVD algorithm outperformed the literature algorithm in the experimental analysis performed for the model with multiple sessions and multiple sources per session.

3
  • ANTONIO DIEGO SILVA FARIAS
  • Generalized OWA functions

  • Advisor : REGIVAN HUGO NUNES SANTIAGO
  • COMMITTEE MEMBERS :
  • BENJAMIN RENE CALLEJAS BEDREGAL
  • EDUARDO SILVA PALMEIRA
  • REGIVAN HUGO NUNES SANTIAGO
  • RONEI MARCOS DE MORAES
  • SANDRA APARECIDA SANDRI
  • Data: Jun 29, 2018


  • Show Abstract
  • In the literature it is quite common to find problems that require efficient mechanisms for combining several inputs of the same nature into a single value of the same type. Aggregation functions are quite efficient at this task and can be used, for example, to model the connectives of fuzzy logic and in decision-making problems. An important family of aggregations, belonging to the class of averaging functions, was introduced by Yager in 1988, who called them ordered weighted averaging (OWA) functions. These functions are a kind of weighted average whose weights are associated not with particular inputs but with their respective magnitudes; that is, the importance of an input is determined by its value. More recently, it has been found that functions outside the aggregation class may also be able to combine inputs, such as pre-aggregations and mixture functions, which may not satisfy the monotonicity condition that is mandatory for aggregation functions. The objective of this work is thus to present a detailed study of aggregations and pre-aggregations, providing a solid theoretical basis in an area with a wide range of applications. We present a detailed study of generalized mixture (GM) functions, which extend Yager's OWA functions, and propose some ways of generalizing GM functions: bounded generalized mixture functions and dynamic ordered weighted averaging functions.
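    For reference, Yager's OWA operator, which the GM functions generalize, is (standard definition):

\[
\mathrm{OWA}_{\mathbf{w}}(x_1,\dots,x_n) \;=\; \sum_{i=1}^{n} w_i\, x_{\sigma(i)},
\qquad w_i \in [0,1],\quad \sum_{i=1}^{n} w_i = 1,
\]

    where $\sigma$ orders the inputs decreasingly, $x_{\sigma(1)} \ge \dots \ge x_{\sigma(n)}$, so each weight attaches to a magnitude rank rather than to a particular input. Mixture-type generalizations replace the fixed vector $\mathbf{w}$ with input-dependent weight functions, which is what may break monotonicity.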
4
  • EDMILSON BARBALHO CAMPOS NETO
  • Improving the SZZ Algorithm to Deal with Semantically Equivalent Changes

  • Advisor : UIRA KULESZA
  • COMMITTEE MEMBERS :
  • DANIEL ALENCAR DA COSTA
  • EDUARDO HENRIQUE DA SILVA ARANHA
  • INGRID OLIVEIRA DE NUNES
  • MARCELO DE ALMEIDA MAIA
  • ROBERTA DE SOUZA COELHO
  • UIRA KULESZA
  • Data: Jul 20, 2018


  • Show Abstract
  • The SZZ algorithm was originally proposed by Sliwerski, Zimmermann, and Zeller (the origin of the abbreviation SZZ) to identify the changes that introduce bugs into code. Although well accepted by the academic community, many researchers have reported limitations of the SZZ algorithm over the years. On the other hand, no work has deeply investigated how SZZ is used, extended, or evaluated by the software engineering community, and few works have proposed improvements to it. In this context, this thesis aims to reveal the limitations of the SZZ algorithm documented in the literature and to improve its state of the art by proposing solutions to some of these limitations. First, we performed a systematic mapping study to identify the state of the art of the SZZ algorithm and to explore how it has been used, its limitations, proposed improvements, and evaluations. We adopted an existing research technique known as snowballing to conduct the systematic literature study: starting from two well-known papers, we read all of their 589 citations and references, resulting in 190 papers to be analyzed. Our results show that most papers use SZZ as the basis of empirical studies (83%), while only a few actually propose direct improvements to SZZ (3%) or evaluate it (7%). We also observed that SZZ has many unaddressed limitations, such as the bias related to semantically equivalent changes, e.g., refactorings, which no previous SZZ implementation had addressed. Subsequently, we conducted an empirical study to investigate the relationship between refactorings and SZZ results, using RefDiff, the refactoring detection tool with the highest precision reported in the literature. We ran RefDiff both on the changes SZZ analyzes as responsible for fixing bugs (issue-fix changes) and on the changes the algorithm flags as fix-inducing. The results indicate a refactoring rate of 6.5% in fix-inducing changes and 20% in issue-fix changes. Moreover, we identified that 39% of fix-inducing changes derive from issue-fix changes containing refactorings, so such changes should not even have been analyzed by SZZ. These results suggest that refactorings can indeed affect SZZ results. Finally, we intend to evolve this second study by expanding the types of detected refactorings, incorporating other refactoring detection tools into our algorithm. In addition, we plan to run a third study to evaluate our improved SZZ implementation for dealing with semantically equivalent changes, using an evaluation framework on a dataset previously used in the literature. We hope the results of this thesis contribute to the maturation of SZZ and, consequently, bring it closer to wider acceptance in practice.
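    For context, a minimal sketch of the core SZZ step (blaming the lines a fix deletes to find candidate bug-introducing commits); the thesis's contribution adds a filter on top of this to discard semantically equivalent changes such as refactorings. Helper names are illustrative and the hunk parsing is simplified:

```python
import subprocess

def changed_lines(repo, fix_commit, path):
    """Line numbers, in the parent version, deleted or modified by the fix."""
    diff = subprocess.run(
        ["git", "-C", repo, "diff", f"{fix_commit}^", fix_commit, "--", path],
        capture_output=True, text=True, check=True).stdout
    lines, old = [], 0
    for row in diff.splitlines():
        if row.startswith("@@"):   # hunk header: @@ -old_start,len +new_start,len @@
            old = int(row.split()[1].lstrip("-").split(",")[0])
        elif row.startswith("-") and not row.startswith("---"):
            lines.append(old); old += 1
        elif not row.startswith("+"):
            old += 1               # context lines advance the old file too
    return lines

def candidate_introducers(repo, fix_commit, path):
    shas = set()
    for line in changed_lines(repo, fix_commit, path):
        blame = subprocess.run(
            ["git", "-C", repo, "blame", "--porcelain", "-L", f"{line},{line}",
             f"{fix_commit}^", "--", path],
            capture_output=True, text=True, check=True).stdout
        shas.add(blame.split()[0])  # first token of porcelain output is the SHA
    return shas
```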

5
  • IGOR ROSBERG DE MEDEIROS SILVA
  • BO-MAHM: A Multi-agent Architecture for Hybridization of Metaheuristics for Bi-objective Optimization

  • Advisor : ELIZABETH FERREIRA GOUVEA GOLDBARG
  • COMMITTEE MEMBERS :
  • ELIZABETH FERREIRA GOUVEA GOLDBARG
  • GIVANALDO ROCHA DE SOUZA
  • MARCO CESAR GOLDBARG
  • MYRIAM REGATTIERI DE BIASE DA SILVA DELGADO
  • SILVIA MARIA DINIZ MONTEIRO MAIA
  • Data: Aug 3, 2018


  • Show Abstract
  • Several studies have pointed to the hybridization of metaheuristics as an effective way to deal with combinatorial optimization problems. Hybridization allows the combination of different techniques, exploiting the strengths and compensating for the weaknesses of each of them. MAHM is a promising adaptive framework for the hybridization of metaheuristics, originally designed for single-objective problems and based on the concepts of Multiagent Systems and Particle Swarm Optimization. In this study we propose an extension of MAHM to the bi-objective scenario, called BO-MAHM. To adapt MAHM to the bi-objective context, we redefine some concepts, such as particle position and velocity. The proposed framework is applied to the bi-objective symmetric Travelling Salesman Problem, hybridizing four methods: PAES, GRASP, NSGA-II, and Anytime-PLS. Experiments with 11 bi-objective instances were performed, and the results show that BO-MAHM is able to provide better non-dominated sets than those obtained by algorithms in the literature, as well as by the hybridized versions of those algorithms proposed in this work.
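    For reference, the standard PSO update that MAHM's agent metaphor builds on is

\[
v_i \leftarrow \omega v_i + c_1 r_1 (p_i - x_i) + c_2 r_2 (g - x_i),
\qquad x_i \leftarrow x_i + v_i,
\]

    where $p_i$ is the particle's best position, $g$ the swarm's best, and $r_1, r_2$ are random values in $[0,1]$; BO-MAHM's contribution lies in redefining $x_i$ and $v_i$ for a population of metaheuristic agents in a bi-objective, discrete setting.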

6
  • DENIS FELIPE
  • MOSCA/D: Multi-objective Scientific Algorithms Based on Decomposition

  • Advisor : ELIZABETH FERREIRA GOUVEA GOLDBARG
  • COMMITTEE MEMBERS :
  • ELIZABETH FERREIRA GOUVEA GOLDBARG
  • MARCO CESAR GOLDBARG
  • SILVIA MARIA DINIZ MONTEIRO MAIA
  • MATHEUS DA SILVA MENEZES
  • MYRIAM REGATTIERI DE BIASE DA SILVA DELGADO
  • Data: Aug 17, 2018


  • Show Abstract
  • This work presents a multi-objective version of the Scientific Algorithms based on decomposition (MOSCA/D). This approach is a new metaheuristic, inspired by the processes of scientific research, for solving multi-objective optimization problems. MOSCA/D uses the concept of theme to direct the computational effort of the search to promising regions of the objective space, fixing different decision variables in each iteration. A probabilistic model based on the TF-IDF statistic assists the choice of such variables. Computational experiments applied MOSCA/D to 16 instances of the multi-objective multidimensional knapsack problem (MOMKP) with up to 8 objectives. The results were compared to NSGA-II, SPEA2, MOEA/D, MEMOTS, 2PPLS, MOFPA, and HMOBEDA, covering three classical multi-objective algorithms, two state-of-the-art algorithms for the problem, and the two most recently published algorithms for the problem, respectively. Statistical tests showed evidence that MOSCA/D can compete with other consolidated approaches in the literature and can now be considered the state-of-the-art algorithm for the MOMKP on instances with more than two objectives, considering the hypervolume and epsilon quality indicators.
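    For reference, the TF-IDF statistic mentioned above, in its common form, scores a term $t$ in a document $d$ out of $N$ documents as

\[
\mathrm{tfidf}(t,d) \;=\; \mathrm{tf}(t,d) \cdot \log \frac{N}{\mathrm{df}(t)},
\]

    with $\mathrm{tf}$ the term frequency and $\mathrm{df}$ the number of documents containing $t$; in MOSCA/D's probabilistic model, the analogous score guides which decision variables to fix in each iteration.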

7
  • JOSÉ AUGUSTO SARAIVA LUSTOSA FILHO
  • Exploring diversity and similarity as criteria in ensemble systems based on dynamic selection

  • Advisor : ANNE MAGALY DE PAULA CANUTO
  • COMMITTEE MEMBERS :
  • ANNE MAGALY DE PAULA CANUTO
  • ARAKEN DE MEDEIROS SANTOS
  • BRUNO MOTTA DE CARVALHO
  • DANIEL SABINO AMORIM DE ARAUJO
  • GEORGE DARMITON DA CUNHA CAVALCANTI
  • Data: Aug 24, 2018


  • Show Abstract
  • Pattern classification can be considered one of the most important activities in the pattern recognition area, aiming to assign an unknown test sample to a class. Individual classifiers generally do not achieve recognition rates as good as those of multiple-classifier systems, so ensembles of classifiers can be used to increase the accuracy of classification systems. Ensemble systems provide good recognition rates when their member classifiers make uncorrelated errors in different sub-spaces of the problem, a characteristic measured by diversity measures. In this context, the present thesis explores ensemble systems using dynamic selection. Unlike ensembles using static selection, in dynamic selection the competence level of each classifier in an initial pool is estimated for each test pattern, and only the most competent classifiers are selected to classify it. This work aims to explore, evaluate, and propose methods for the dynamic selection of classifiers based on diversity measures. To achieve this goal, several ensemble systems from the literature that use dynamic selection are explored, and hybrid versions of them are proposed, in order to quantify, through experiments, the influence of diversity measures among member classifiers in ensemble systems. The contribution of this work is thus to empirically elucidate the advantages and disadvantages of using diversity measures in the dynamic selection of classifiers. A minimal example of a dynamic-selection rule is sketched below.
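    The sketch shows one classical dynamic-selection rule (overall local accuracy): for each test sample, pick the pool member most accurate on its k nearest validation neighbors. Diversity-aware variants like those studied here would add a diversity term to this competence estimate; the dataset and pool are illustrative:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, random_state=0)
X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_val, X_te, y_val, y_te = train_test_split(X_rest, y_rest, test_size=0.5,
                                            random_state=0)

pool = [DecisionTreeClassifier(max_depth=d, random_state=0).fit(X_tr, y_tr)
        for d in (1, 3, 5, 10)]                  # the initial set of classifiers
knn = NearestNeighbors(n_neighbors=7).fit(X_val)

preds = []
for x in X_te:
    _, idx = knn.kneighbors([x])                 # region of competence
    rX, ry = X_val[idx[0]], y_val[idx[0]]
    scores = [clf.score(rX, ry) for clf in pool]  # local competence estimate
    preds.append(pool[int(np.argmax(scores))].predict([x])[0])

print("dynamic-selection accuracy:", np.mean(np.array(preds) == y_te))
```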

8
  • RONILDO PINHEIRO DE ARAUJO MOURA
  • Hierarchical Clustering Ensembles Preserving T-transitivity

  • Advisor : BENJAMIN RENE CALLEJAS BEDREGAL
  • COMMITTEE MEMBERS :
  • ANNE MAGALY DE PAULA CANUTO
  • BENJAMIN RENE CALLEJAS BEDREGAL
  • FLAVIO BEZERRA COSTA
  • ARAKEN DE MEDEIROS SANTOS
  • EDUARDO SILVA PALMEIRA
  • Data: Oct 5, 2018


  • Show Abstract
  • The main idea of ensemble learning is to improve machine learning results by combining several models. Initially applied to supervised learning, this approach usually produces better results than single methods. Similarly, unsupervised ensemble learning, or consensus clustering, creates individual clusterings and combines them into a result that is more robust than that of any single method. The most common methods are designed for flat clustering and show quality superior to single clustering methods; it can thus be expected that a consensus of hierarchical clusterings could also lead to higher-quality hierarchical clustering. Recent studies, however, have not considered the particularities inherent to the different hierarchical clustering methods during the consensus process. This work investigates the impact of ensemble consistency on the final consensus results, considering the different hierarchical methods used in the ensemble. We propose a process that preserves transitivity in the intermediate dendrograms. In this algorithm, the dendrograms describing the base clusterings are first converted into ultrametric matrices; then, after a fuzzification step, a consensus function based on aggregation operators that preserve the transitivity property is applied to the matrices, forming the final consensus matrix. The final clustering is a dendrogram obtained from this aggregated matrix. Analyzing the results of experiments performed on well-known datasets, and visualizing the algorithm's behaviour on two-dimensional datasets, shows that this approach can significantly improve accuracy while retaining the consistency property.
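    For reference, a fuzzy similarity relation $R$ is T-transitive with respect to a t-norm $T$ when

\[
T\big(R(x,y),\, R(y,z)\big) \;\le\; R(x,z) \qquad \forall\, x, y, z;
\]

    for $T = \min$ this is exactly the property corresponding to ultrametric dendrogram distances, which is why preserving it through the aggregation step guarantees that the consensus matrix still decodes into a valid dendrogram.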

9
  • EDUARDO ALEXANDRE FERREIRA SILVA
  • Mission-driven Software-intensive System-of-Systems Architecture Design

  • Advisor : THAIS VASCONCELOS BATISTA
  • COMMITTEE MEMBERS :
  • ABDELHAK-DJAMEL SERIAI
  • ELISA YUMI NAKAGAWA
  • FLAVIO OQUENDO
  • KHALIL DRIRA
  • MARCEL VINICIUS MEDEIROS OLIVEIRA
  • THAIS VASCONCELOS BATISTA
  • Data: Dec 17, 2018


  • Show Abstract
  • Missions represent a key concern in the development of systems-of-systems (SoS), since they relate both to the capabilities of constituent systems and to the interactions among these systems that contribute to the accomplishment of the global goals of the SoS. For this reason, mission models are promising starting points for the SoS development process, and they can be used as a basis for the specification, validation, and verification of SoS architectural models. Specifying, validating, and verifying architectural models for SoS are difficult tasks compared to usual systems; the inner complexity of this kind of system lies especially in its emergent behaviors, i.e., features that emerge from the cooperation between the constituent parts of the SoS and that often cannot be accurately predicted.

    This work is concerned with this synergetic relationship between mission and architectural models, giving special attention to the emergent behavior that arises in a given configuration of the SoS. We propose a development process for the architectural modeling of SoS centered on mission models, which are used both to derive and to validate and verify SoS architectures. First, we define a formal mission model; then we generate the structural definition of the architecture using model transformation. Later, as the architect specifies the behavioral aspects of the system, we can generate concrete architectures that are verified and validated using simulation-based approaches. The verification uses statistical model checking to establish, within a degree of confidence, whether the properties are satisfied. The validation targets emergent behaviors and missions but can be extended to any aspect of the mission model; the simulation also allows the identification of unpredicted emergent behaviors. A toolset that integrates existing tools and implements the whole process is also presented.

     
2017
Dissertations
1
  • ILUENY CONSTANCIO CHAVES DOS SANTOS
  • A Systematic Approach to Check Compliance with Legal Requirements

     

  • Advisor : MARCIA JACYNTHA NUNES RODRIGUES LUCENA
  • COMMITTEE MEMBERS :
  • MARCIA JACYNTHA NUNES RODRIGUES LUCENA
  • EDUARDO HENRIQUE DA SILVA ARANHA
  • JOSUÉ VITOR DE MEDEIROS JÚNIOR
  • GILBERTO AMADO DE AZEVEDO CYSNEIROS FILHO
  • Data: Jan 16, 2017


  • Show Abstract
  • It is common sense that information systems play a vital role in supporting the business processes of companies, and software is often a strategically important asset for many organizations. Increasingly, laws and regulations are drawn up that impose restrictions on existing software systems, and companies are required to develop complex systems that comply with current legislation. Legal requirements are the set of laws and regulations that apply to the business domain of the software to be developed. These requirements are highly sensitive to changes in legislation: like society, organizations, and software, the law also evolves; new laws are constantly enacted, old laws are amended or revoked, and important court decisions are issued. The dynamism of the law requires continuous adaptation of the modeled legal requirements. Over the past few years, several studies have addressed this issue, but few tackle the challenges of monitoring and evaluating legal compliance throughout the system life cycle. The losses incurred by an organization that neglects the legal compliance of its software requirements can range from financial losses to damage to its reputation. In this context, it was observed that requirements management, notably through the use of requirements traceability, can play a key role in verifying the legal compliance of systems. This work presents an approach to assist the development team manager in software maintenance activities and in verifying compliance with applicable laws. A case study will be conducted to evaluate the effectiveness of the proposed tool.

     

2
  • CARINE AZEVEDO DANTAS
  • An Unsupervised-based Feature Selection for Classification Tasks

  • Advisor : ANNE MAGALY DE PAULA CANUTO
  • COMMITTEE MEMBERS :
  • ANNE MAGALY DE PAULA CANUTO
  • BRUNO MOTTA DE CARVALHO
  • DANIEL SABINO AMORIM DE ARAUJO
  • JOAO CARLOS XAVIER JUNIOR
  • ADRIANA TAKAHASHI
  • Data: Feb 10, 2017


  • Show Abstract
  • With the increase in the size of the data sets used in classification systems, selecting the most relevant attributes has become one of the main tasks in the pre-processing phase. Ideally, all attributes in a dataset would be relevant, but this is not always the case. Selecting a subset of the most relevant attributes helps reduce the size of the data without degrading performance, or may even improve it, thus achieving better results in data classification. Existing feature selection methods choose the best attributes for the dataset as a whole, without considering the particularities of each instance. The proposed method, Unsupervised-based Feature Selection, selects the relevant attributes for each instance individually, using clustering algorithms to group instances according to their similarities. This work performs an experimental analysis of different clustering techniques applied to this new feature selection approach. The clustering algorithms k-Means, DBSCAN and Expectation-Maximization (EM) were used as selection methods, and analyses were performed to verify which of them best fits this new feature selection approach. Thus, the contribution of this study is to present a new approach for attribute selection, through a Semidynamic and a Dynamic version, and to determine which of the clustering methods performs the better selection and yields more accurate classifiers.
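    The following sketch illustrates the general idea of cluster-driven feature selection; it is an illustrative proxy using scikit-learn, not the dissertation's exact algorithm. Instances are grouped with k-Means and, within each cluster, the features with the lowest within-cluster variance are kept as the ones that best characterize that group.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

# Illustrative proxy for per-instance feature selection via clustering:
# group instances with k-Means, then, inside each cluster, keep the k
# features with the lowest within-cluster variance.
X, _ = load_iris(return_X_y=True)
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

k = 2  # number of features kept per cluster
for c in np.unique(clusters):
    members = X[clusters == c]
    selected = np.argsort(members.var(axis=0))[:k]
    print(f"cluster {c}: instances use features {sorted(selected.tolist())}")
```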

3
  • ADORILSON BEZERRA DE ARAUJO
  • An Empirical Study to Analyze the Compatibility of Android Applications with Different Versions of the Platform API

  • Advisor : UIRA KULESZA
  • COMMITTEE MEMBERS :
  • UIRA KULESZA
  • EDUARDO HENRIQUE DA SILVA ARANHA
  • RODRIGO BONIFACIO DE ALMEIDA
  • Data: Feb 14, 2017


  • Show Abstract
  • The Android platform is currently the most popular platform for the development of mobile applications, representing more than 80% of the operating system market for mobile devices. This creates demand for application customizations to handle differences between devices, such as screen size, processing power, available memory, languages and specific user needs. Twenty-three new versions of the Android platform have been released since its first release. In order to enable the successful execution of applications on different devices, it is essential to support multiple versions of its Application Programming Interface (API).

    This dissertation aims to analyze, characterize and compare the techniques used by Android applications to support multiple versions of the API. In particular, the work seeks: (i) to identify, in the literature, the techniques used to support multiple versions of the Android API; (ii) to analyze real applications to quantify the use of these techniques; and (iii) to compare the characteristics and consequences of using such techniques. An empirical study of 25 popular Android apps was conducted to achieve this goal. The results show that there are three techniques to support multiple versions of the API: (i) compatibility package, for coarse-grained API variabilities involving a set of classes; (ii) re-implementation of a resource, for specific situations with coarse granularity at the class level, or when the resource is not available in the compatibility package; and (iii) explicit use of the new API, for fine-grained API variabilities that involve calls to specific methods. Through the analysis of the 25 applications, we identified that the compatibility package was used by 23 applications, re-implementation of a resource by 14 applications, and explicit use of the new API by 22 applications. The fragments API contains the most common elements, among those released in later versions of the platform, used by applications during their evolution, and it is referenced by 68% of them. In general, applications could increase their potential market with adaptations of, on average, 15 code snippets. On the other hand, application developers have been careful to avoid dead code related to the platform API: of 7 applications analyzed, 4 contained dead code, but it did not represent more than 0.1% of the total code.
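    As a hedged illustration of how the third technique (explicit use of the new API) could be quantified in a study like this: Android code that adapts to the running platform typically guards calls with checks on Build.VERSION.SDK_INT, so a simple script can approximate the count of such guards in a source tree. The regex and the path below are assumptions for illustration, not the study's actual instrumentation.

```python
import re
from pathlib import Path

# Rough approximation of counting "explicit use of the new API" guards:
# runtime checks on android.os.Build.VERSION.SDK_INT in Java sources.
GUARD = re.compile(r"Build\.VERSION\.SDK_INT\s*(>=|>|<|<=|==)\s*\w+")

def count_version_guards(src_root: str) -> int:
    total = 0
    for java_file in Path(src_root).rglob("*.java"):
        total += len(GUARD.findall(java_file.read_text(errors="ignore")))
    return total

if __name__ == "__main__":
    print(count_version_guards("app/src/main/java"))  # hypothetical path
```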

4
  • RANMSÉS EMANUEL MARTINS BASTOS
  • The Traveling Salesman with Passengers and High Occupancy Problem

  • Advisor : MARCO CESAR GOLDBARG
  • COMMITTEE MEMBERS :
  • ELIZABETH FERREIRA GOUVEA GOLDBARG
  • LUCÍDIO DOS ANJOS FORMIGA CABRAL
  • MARCO CESAR GOLDBARG
  • MATHEUS DA SILVA MENEZES
  • SILVIA MARIA DINIZ MONTEIRO MAIA
  • Data: Feb 17, 2017


  • Show Abstract
  • The Traveling Salesman with Passengers and High Occupancy Problem is a version of the classic TSP where the salesman is the driver of a vehicle who shares travel expenses with passengers. Besides shared expenses, the driver also benefits from the discounts of high-occupancy vehicle lanes, i.e., traffic lanes in which high-occupancy vehicles are exempted from tolls. This work addresses the study of this novel combinatorial optimization problem, from the relationship it draws with other problems widely covered in the literature, through a review of related work, to the conception of artificial test cases that serve as comparison subjects for the experimental algorithms developed to solve it. It presents a succinct mathematical model for the problem and an algorithm based on the Simulated Annealing and Variable Neighborhood Search metaheuristics. The results of the heuristic algorithm are compared with the optimal solutions obtained by an exact algorithm.
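    For illustration, a minimal Simulated Annealing skeleton over tours is sketched below; in the actual problem the cost function would embed shared expenses and high-occupancy discounts, which is why it is left as a parameter here. This is a generic sketch, not the dissertation's algorithm.

```python
import math
import random

def simulated_annealing(cost, tour, t0=100.0, cooling=0.995, iters=20000):
    """Minimal SA over tours: swap two cities, accept worse moves with
    probability exp(-delta/T). `cost` would embed shared expenses and
    HOV-lane discounts in the actual problem."""
    best, best_cost = tour[:], cost(tour)
    cur, cur_cost, t = tour[:], best_cost, t0
    for _ in range(iters):
        i, j = random.sample(range(len(cur)), 2)
        cand = cur[:]
        cand[i], cand[j] = cand[j], cand[i]
        delta = cost(cand) - cur_cost
        if delta < 0 or random.random() < math.exp(-delta / t):
            cur, cur_cost = cand, cur_cost + delta
            if cur_cost < best_cost:
                best, best_cost = cur[:], cur_cost
        t *= cooling
    return best, best_cost

# Toy usage: plain Euclidean tour length as a stand-in cost function.
pts = [(random.random(), random.random()) for _ in range(30)]
def tour_len(t):
    return sum(math.dist(pts[t[k]], pts[t[(k + 1) % len(t)]]) for k in range(len(t)))
print(simulated_annealing(tour_len, list(range(30)))[1])
```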

5
  • RAIMUNDO LEANDRO ANDRADE MARQUES
  • Multi Mixed Population Evolutionary Algorithm applied to the Multiobjective Degree Constrained Minimum Spanning Tree Problem

  • Advisor : MARCO CESAR GOLDBARG
  • COMMITTEE MEMBERS :
  • ELIZABETH FERREIRA GOUVEA GOLDBARG
  • LUCÍDIO DOS ANJOS FORMIGA CABRAL
  • MARCO CESAR GOLDBARG
  • MATHEUS DA SILVA MENEZES
  • SILVIA MARIA DINIZ MONTEIRO MAIA
  • Data: Feb 17, 2017


  • Show Abstract
  • In recent years, the Multiobjective Degree Constrained Minimum Spanning Tree Problem has attracted the attention of combinatorial optimization researchers, especially due to its wide applicability to practical network design problems. The problem is NP-hard, even in its mono-objective version with a fixed degree constraint. The new algorithm proposed here, called MMPEA, uses shared external archives and different multiobjective optimization techniques in a parallel execution for a broader exploration of the search space. This MMPEA version adopts the MPAES, NSGA2 and SPEA2 algorithms in its implementation, which are also used in the comparison tests. A total of 5040 empirical tests are presented here, including different graph generators and instances of 50 up to 1000 vertices. Given the multi-objective nature of the problem, the results of these experiments are reported by means of the hypervolume and binary ε-indicators. The significance of the computational experiments is evaluated with the Mann-Whitney statistical test.
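    The statistical comparison mentioned above can be reproduced in outline with SciPy: hypervolume values collected over independent runs of two algorithms are compared with the Mann-Whitney U test. The numbers below are toy values, not the dissertation's data.

```python
from scipy.stats import mannwhitneyu

# Hypervolume per independent run for two algorithms (illustrative values).
hv_mmpea = [0.91, 0.89, 0.93, 0.92, 0.90, 0.94, 0.91]
hv_nsga2 = [0.86, 0.88, 0.85, 0.87, 0.89, 0.84, 0.86]

stat, p_value = mannwhitneyu(hv_mmpea, hv_nsga2, alternative="two-sided")
print(f"U = {stat}, p = {p_value:.4f}")  # p < 0.05 suggests a real difference
```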

6
  • TAIZA RABELLO MONTENEGRO
  • ExceptionPolicyExpert: a tool to assist exception handling development

  • Advisor : ROBERTA DE SOUZA COELHO
  • COMMITTEE MEMBERS :
  • ROBERTA DE SOUZA COELHO
  • FERNANDO MARQUES FIGUEIRA FILHO
  • EIJI ADACHI MEDEIROS BARBOSA
  • FERNANDO JOSÉ CASTOR DE LIMA FILHO
  • Data: Feb 20, 2017


  • Show Abstract
  • As our society becomes more and more dependent on software systems, the demand for robustness increases. The exception handling mechanism is one of the most used techniques to enable the development of robust software systems. The exception handling policy comprises the set of rules that specify how exceptions should be thrown and handled inside a system, but this policy is usually not explicitly defined. As a consequence, it becomes a challenge for developers to write exception handling code that conforms to it. This work proposes an Eclipse plug-in, called ExceptionPolicyExpert, to guide the developer in implementing this kind of code by checking policy violations and providing recommendations concerning how exceptions should be handled and signaled. In order to support the creation of such a tool, we performed a qualitative study, using Grounded Theory techniques, to understand the main challenges developers face when implementing exception handling code. This study showed that most developers did not receive any instructions regarding the exception handling policy and often handle exceptions in the wrong way. Therefore, the proposed tool provides information to the developer regarding the exception handling policy, integrated into the IDE, helping them to develop exception handling code and preventing policy violations. The tool evaluation showed that it helps the developer make decisions when implementing exception handling code.

7
  • NARALLYNNE MACIEL DE ARAÚJO
  • Open Data from the Brazilian Government: Understanding the Perspectives of Data Suppliers and Developers of Applications to the Citizens

  • Advisor : FERNANDO MARQUES FIGUEIRA FILHO
  • COMMITTEE MEMBERS :
  • FERNANDO MARQUES FIGUEIRA FILHO
  • NELIO ALESSANDRO AZEVEDO CACHO
  • NAZARENO ANDRADE
  • Data: Feb 21, 2017


  • Show Abstract
  • Open Government Data (OGD) are seen as a way to promote transparency, as well as to provide information to the population by opening data related to various government sectors. Citizens, by using applications developed with this type of data, gain knowledge about a certain public sphere; governments, in turn, are able to promote transparency and improvements through interaction with the citizens who use such applications. However, the creation and success of projects that use OGD depend on developers who are able to extract, process and analyze this information, as well as on the quality of the data made available by their suppliers. This research was conducted in two phases: the first phase investigated the perspective of developers who use Brazilian OGD to build applications that aim to bring greater transparency to citizens; the second phase investigated the perspectives of those responsible for publishing OGD on portals, i.e., OGD providers. Through twenty-four semi-structured interviews with twelve developers and twelve suppliers, this work reports what motivates them to work with OGD, as well as the barriers they face in this process. Our findings indicate that both actors seek to promote transparency for the population, but they struggle with the poor quality of OGD, cultural barriers, and other issues. In this work, we present and qualitatively characterize these issues. We also provide recommendations, according to the perspectives of developers and data providers, with the aim of bringing benefits to the Brazilian OGD ecosystem and its citizens.

8
  • JORGE PEREIRA DA SILVA
  • EcoCIT: A Scalable Platform for the Development of IoT Applications 

  • Advisor : THAIS VASCONCELOS BATISTA
  • COMMITTEE MEMBERS :
  • FLAVIA COIMBRA DELICATO
  • NELIO ALESSANDRO AZEVEDO CACHO
  • PAULO DE FIGUEIREDO PIRES
  • THAIS VASCONCELOS BATISTA
  • Data: Mar 24, 2017


  • Show Abstract
  • The Internet of Things (IoT) paradigm encompasses a hardware and software infrastructure that connects physical devices, known as things, to the digital world. It is estimated that by 2020 there will be nearly 100 billion connected IoT devices whose data and services will be used to build a myriad of applications. However, developing applications in the IoT context is not a trivial task. Given the large number and variety of devices involved, these applications need to be built and executed in a scalable way, to support a large number of connected devices as well as to store and process the huge amount of data they produce. Additionally, applications in this context also need to deal with several different protocols. Middleware platforms have emerged as promising solutions to facilitate application development in this setting. These platforms offer standardized interfaces for access to devices, shielding developers from the details of network communication, protocols and data formats used by the various devices. In this perspective, this work presents EcoCIT, a scalable middleware platform that supports the integration of IoT devices with the Internet, as well as the development and execution of IoT applications with scalability requirements, through the use of scalable technologies and on-demand computing services provided by cloud computing platforms.

9
  • LUCAS TOMÉ AVELINO CÂMARA
  • Acquisition and analysis of the first mouse dynamics biometrics database for user verification in the online collaborative game League of Legends

  • Advisor : MARJORY CRISTIANY DA COSTA ABREU
  • COMMITTEE MEMBERS :
  • BRUNO MOTTA DE CARVALHO
  • CARLOS NASCIMENTO SILLA JÚNIOR
  • CHARLES ANDRYE GALVAO MADEIRA
  • MARJORY CRISTIANY DA COSTA ABREU
  • Data: Jun 26, 2017


  • Show Abstract
  • Digital games are very popular around the world. Some of them are highly competitive and even considered sports in some countries. League of Legends is one of the most played games today, with millions of players, and its world championships are broadcast over the internet to the entire world. Account sharing is one of the biggest problems in this game, because more skilled players can play on someone else's account to raise its skill rating, causing unbalanced matches. Account sharing is harder to address than account stealing, because the owner deliberately allows another person to access the account. A mouse dynamics system can be a solution for detecting account sharing, since the mouse is heavily used by players during a match. Investigating the effectiveness of mouse dynamics for detecting account sharing in games is therefore an interesting research topic, but there was no public database of mouse dynamics for online games or similar situations. Hence, a new database was collected and is presented in this work. Statistical analysis and classification experiments showed that mouse dynamics features can be used to identify League of Legends players in a smaller context, with an accuracy of 93.2203%.
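    A sketch of the verification setup, on synthetic data rather than the collected database: each row holds mouse-dynamics features extracted from a match window (e.g., mean speed, acceleration, click interval), the label identifies the player, and a standard classifier is evaluated with cross-validation. The feature set and classifier are illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for the mouse-dynamics database:
# 200 match windows, 8 features each, 5 players.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = rng.integers(0, 5, size=200)

scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5)
print(f"mean accuracy: {scores.mean():.3f}")
```
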
10
  • ISAAC NEWTON DA SILVA BESERRA
  • Acquisition and analysis of the first keystroke dynamics biometrics database for user verification in the online collaborative game League of Legends.

  • Advisor : MARJORY CRISTIANY DA COSTA ABREU
  • COMMITTEE MEMBERS :
  • BRUNO MOTTA DE CARVALHO
  • CARLOS NASCIMENTO SILLA JÚNIOR
  • CHARLES ANDRYE GALVAO MADEIRA
  • MARJORY CRISTIANY DA COSTA ABREU
  • Data: Jun 26, 2017


  • Show Abstract
  • The popularity of computer games has grown exponentially in the last few years, reaching a point where they have stopped being just child's play and become actual sports. As such, many players invest a lot of time and money to improve and become the best. In response, many online game companies protect users' accounts in a variety of ways, such as secondary passwords, e-mail confirmation when the account is accessed from other devices, and mobile text messages. However, none of these techniques applies to account sharing. In the competitive scenario, players are ranked by their level of skill, which is derived from their achievements and victories; thus, when players share their account, they are ranked at a level that does not correspond to their actual skill, causing an imbalance in matches. The game League of Legends suffers greatly from this practice, popularly known as Elo Job, which is forbidden by the game company itself and, when discovered, causes the player to be permanently banned from the game. As the game uses the keyboard for most of its actions, continuous verification based on keystroke dynamics during the game would be ideal, as it could potentially identify whether the player is really the owner of the account; the system could then penalize players who share their accounts. For this work, a keystroke-based biometrics database was populated with data collected from real players. The data were analyzed and tested with several classifiers, obtaining a hit rate of 65.90%, which is not enough for good identification. However, combining keystroke dynamics features with mouse dynamics features produced much better results, reaching a promising hit rate of 90.0%.
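    The feature-level fusion mentioned above can be sketched as the concatenation of the keystroke and mouse feature vectors for the same session before classification; the data, feature counts and classifier below are illustrative assumptions, not the study's setup.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in: keystroke features (e.g., dwell and flight times)
# and mouse features (e.g., speed, curvature, clicks) for the same sessions.
rng = np.random.default_rng(1)
keystroke = rng.normal(size=(150, 6))
mouse = rng.normal(size=(150, 8))
labels = rng.integers(0, 5, size=150)

fused = np.hstack([keystroke, mouse])  # feature-level fusion
clf = KNeighborsClassifier(n_neighbors=3)
for name, X in [("keystroke only", keystroke), ("fused", fused)]:
    acc = cross_val_score(clf, X, y=labels, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```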

11
  • TERESA DO CARMO BARRÊTO FERNANDES
  • ExMinerSOF: Mining Exceptional Information from StackOverflow 

  • Advisor : ROBERTA DE SOUZA COELHO
  • COMMITTEE MEMBERS :
  • FRANCISCO DANTAS DE MEDEIROS NETO
  • LYRENE FERNANDES DA SILVA
  • ROBERTA DE SOUZA COELHO
  • UIRA KULESZA
  • Data: Jun 30, 2017


  • Show Abstract
  • Uncaught exceptions are not an exceptional scenario in current Java applications. They are actually one of the main causes of application crashes, which can originate from programming errors in the application itself (e.g., null pointer dereferences), or from faults in the underlying hardware or in re-used APIs.
    Such uncaught exceptions result in exception stack traces that are often used by developers as a source of information for debugging. This information is frequently fed into search engines or Question and Answer sites while the developer tries to better understand the cause of the crash and solve it.
    This study mined the exception stack traces embedded in StackOverflow (SOF) questions and answers. The goal of this work was two-fold: to identify common characteristics of such stack traces and to investigate how this information can be used to prevent uncaught exceptions during software development. Overall, 121,253 exception stack traces were extracted and analyzed in combination with Q&A inspections. This work provides insights on how the information embedded in exception stack traces can be used to discover exceptions that can potentially be thrown by API methods but are not part of the API documentation.
    Hence, this study proposes the ExMinerSOF tool, which alerts the developer about exceptions that can potentially be signaled by an API method but are not part of the API documentation, as discovered by applying a mining strategy to the SOF repository. In doing so, the tool enables the developer to prevent faults based on failures reported by the crowd.
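    A hedged sketch of the mining step: a simple regular expression that recognizes Java exception stack traces embedded in post text. The tool's actual extraction rules are likely more elaborate; the pattern below only illustrates the idea.

```python
import re

# Approximate pattern: an exception class name, an optional message,
# then one or more "at package.Class.method(File.java:line)" frames.
TRACE = re.compile(
    r"(?P<exception>[\w.]+(?:Exception|Error))(?::[^\n]*)?\n"
    r"(?P<frames>(?:\s+at\s+[\w.$<>]+\([^)]*\)\n?)+)"
)

post = """Something broke:
java.lang.NullPointerException: oops
    at com.example.Service.run(Service.java:42)
    at com.example.Main.main(Main.java:10)
"""

for m in TRACE.finditer(post):
    frames = m.group("frames").strip().splitlines()
    print(m.group("exception"), f"({len(frames)} frames)")
```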

12
  • EMERSON BEZERRA DE CARVALHO
  • Experimental Analysis of Variants of the Lin-Kernighan Heuristic for the Multi-objective Traveling Salesman Problem

  • Advisor : ELIZABETH FERREIRA GOUVEA GOLDBARG
  • COMMITTEE MEMBERS :
  • CAROLINA DE PAULA ALMEIDA
  • ELIZABETH FERREIRA GOUVEA GOLDBARG
  • MARCO CESAR GOLDBARG
  • SILVIA MARIA DINIZ MONTEIRO MAIA
  • Data: Jul 24, 2017


  • Show Abstract
  • The Lin-Kernighan heuristic (LK) is one of the most effective methods for the Traveling Salesman Problem (TSP). For this reason, different implementations of LK have been proposed in the literature, and the heuristic is also used as part of various metaheuristic algorithms. LK has been used in the context of the multi-objective TSP (MTSP) as originally proposed by its authors, i.e., with a single-objective focus. This study investigates variants of the LK heuristic in the multi-objective context and the potential of LK extensions combined with other metaheuristic techniques. Results of a computational experiment are reported for MTSP instances with 2, 3 and 4 objectives.

13
  • NELSON ION DE OLIVEIRA
  • An Interactive Environment to Support the Teaching of Logic of Programming in the Technical Courses (e-learning) of Instituto Metrópole Digital / UFRN

  • Advisor : MARCEL VINICIUS MEDEIROS OLIVEIRA
  • COMMITTEE MEMBERS :
  • ALEX SANDRO GOMES
  • JORGE TARCISIO DA ROCHA FALCAO
  • MARCEL VINICIUS MEDEIROS OLIVEIRA
  • MARCIA JACYNTHA NUNES RODRIGUES LUCENA
  • Data: Jul 24, 2017


  • Show Abstract
  • The main objective of this work is to present a proposal for an intervention in the didactic material of the Programming Logic discipline of the technical courses offered in distance mode by the Digital Metropolis Institute of the Federal University of Rio Grande do Norte. This intervention was proposed as the result of a mixed exploratory study with hundreds of students of the courses offered. To ground the proposal, a descriptive frequency analysis was performed on quantitative data about the academic performance of 2,500 students from the first semesters of 2015 and 2016. Online questionnaires were administered to more than 600 students to identify their profile. We also conducted semi-structured interviews with 37 students, using score and age criteria to define the interview groups. Based on the data obtained and analyzed in this research, the proposed intervention was implemented by inserting a resource into the didactic material used in the Programming Logic discipline of that course. This intervention adds a feature that allows the student to solve programming exercises using a high-level programming language, coupled with automated feedback.

14
  • LEO MOREIRA SILVA
  • PerfMiner Visualizer: a tool for the analysis of performance quality attribute evolution in software systems

  • Advisor : UIRA KULESZA
  • COMMITTEE MEMBERS :
  • Felipe Alves Pereira Pinto
  • LYRENE FERNANDES DA SILVA
  • RENATO LIMA NOVAIS
  • UIRA KULESZA
  • Data: Jul 26, 2017


  • Show Abstract
  • During its maintenance and evolution, a software system can undergo several modifications that may have negative consequences, reducing its quality and increasing its complexity. This deterioration can also affect system performance over time; without due monitoring, the performance quality attribute may no longer be adequately met. The software visualization area proposes techniques whose objective is to improve the understanding of software and make its development process more productive. In this context, this work presents a tool that visualizes performance deviations across subsequent evolutions of a software system, to assist the analysis of performance evolution between versions. Through call graph and scenario summarization visualizations, the tool allows developers and architects to identify scenarios and methods whose performance has varied, including the potential causes of such deviations through commits. This work also presents an empirical study that evaluates the tool by applying it to 10 evolutionary versions of 2 open-source systems from different domains and by using online questionnaires to obtain feedback from their developers and architects. The results of the study provide preliminary evidence of the effectiveness of the visualizations provided by the tool compared to tabular data. In addition, the node suppression algorithm of the call graph visualization was able to reduce the number of nodes displayed to the user by between 73.77% and 99.83%, making it easier to identify the possible causes of variations.
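    The node suppression idea can be sketched as follows (an illustrative recursion, not PerfMiner's actual algorithm): given per-method performance deviations between two versions, call-graph nodes are pruned when neither they nor any descendant exceeds a deviation threshold.

```python
def suppress(graph, deviation, node, threshold=0.05):
    """graph: dict node -> list of callees; returns the pruned subtree
    rooted at `node`, keeping only paths leading to notable deviations."""
    kept_children = [
        sub for child in graph.get(node, [])
        if (sub := suppress(graph, deviation, child, threshold)) is not None
    ]
    if abs(deviation.get(node, 0.0)) >= threshold or kept_children:
        return (node, kept_children)
    return None  # nothing interesting below: suppress this node

calls = {"main": ["parse", "render"], "render": ["layout", "paint"]}
dev = {"paint": 0.40}  # paint became 40% slower between versions
print(suppress(calls, dev, "main"))
# ('main', [('render', [('paint', [])])])
```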

15
  • HUGO HENRIQUE DE OLIVEIRA MESQUITA
  • AN APPROACH TO THE DEVELOPMENT OF EDUCATIONAL GAMES IN K-12

  • Advisor : EDUARDO HENRIQUE DA SILVA ARANHA
  • COMMITTEE MEMBERS :
  • ALBERTO SIGNORETTI
  • EDUARDO HENRIQUE DA SILVA ARANHA
  • THIAGO REIS DA SILVA
  • UIRA KULESZA
  • Data: Jul 28, 2017


  • Show Abstract
  • Digital educational games are pointed out as a good classroom tool, since they immerse students in a playful learning environment that fosters the development of several skills. However, teachers cannot always find games that fit their educational needs, which often makes such games unfeasible for use in the school context. Based on this scenario, this work presents the design and implementation of a platform that aims to give teachers and students the ability to create games in a simple and intuitive way, and that allows teachers to adapt the content of the games to their pedagogical objectives. The results of this research contribute to future innovations in the theoretical and practical fields, through the definition of a component-based approach and the development of a platform that promotes the creation of digital educational games with different pedagogical objectives. Results suggest that the proposed platform, together with its motivational approach, is effective.

16
  • FÁBIO FERNANDES PENHA
  • SMiLe: A Modular Textual Notation for iStar Requirements Models

  • Advisor : MARCIA JACYNTHA NUNES RODRIGUES LUCENA
  • COMMITTEE MEMBERS :
  • FERNANDA MARIA RIBEIRO DE ALENCAR
  • LYRENE FERNANDES DA SILVA
  • MARCIA JACYNTHA NUNES RODRIGUES LUCENA
  • Data: Jul 28, 2017


  • Show Abstract
  • Computational systems are present everywhere, assuming a determining role in the most diverse activities and areas. The environments for which systems are needed and designed are increasingly complex, social and technical. In this setting, the iStar Framework is a modeling language used to build Strategic Dependency and Strategic Rationale models, which capture and represent the motivations and relationships of those involved in the studied environment. This approach has been used in several domains, such as telecommunications, air traffic control, agriculture and health. Despite this, the framework has struggled to gain wide acceptance in industry, and some studies point out weaknesses of the iStar Framework when dealing with complex systems or systems involving many actors. In this context, the objective of this dissertation is to present and implement a textual notation that complements the iStar Framework's graphical models in order to address some challenges related to the use of iStar models, such as modularity and scalability.

17
  • JOÃO HELIS JUNIOR DE AZEVEDO BERNARDO
  • The Impact of Continuous Integration Adoption on the Delivery Time of Merged Pull Requests: An Empirical Study

  • Advisor : UIRA KULESZA
  • COMMITTEE MEMBERS :
  • DANIEL ALENCAR DA COSTA
  • EDUARDO HENRIQUE DA SILVA ARANHA
  • MARCELO DE ALMEIDA MAIA
  • ROBERTA DE SOUZA COELHO
  • UIRA KULESZA
  • Data: Jul 31, 2017


  • Show Abstract
  • Continuous Integration (CI) is a software development practice that leads developers to integrate their source code more frequently. Software projects have widely adopted CI in order to improve code integration and release new versions to their users more quickly. CI adoption is usually motivated by the appeal of delivering new software functionality more quickly and more frequently, but there is little empirical evidence to support such claims. Over recent years, many software projects hosted in social coding environments, such as GitHub, have adopted CI using services that can easily be integrated into these environments (e.g., Travis-CI). This dissertation empirically investigates the impact of CI adoption on the delivery time of pull requests (PRs) by analyzing 167,037 PRs from 90 GitHub projects implemented in 5 different programming languages. The results show that, before CI adoption, a median of 13.8% of PRs have their delivery postponed by at least one release, whereas after CI adoption a median of 24% of PRs have their delivery postponed to future releases. Contrary to what one might speculate, PRs tend to wait longer to be delivered after CI adoption in the majority (53%) of the investigated projects. The large increase in PR submissions after CI is a key reason why projects take longer to deliver PRs after adopting CI: 77.8% of the projects increase their PR submission rate after adopting CI. To investigate the factors related to the delivery time of merged PRs, we trained linear and logistic regression models, which obtained median R-squared values of 0.72-0.74 and good median AUC values of 0.85-0.90. Deeper analyses of our models suggest that, both before and after CI adoption, the intensity of code contributions to a release may increase the delivery time of PRs due to a heavier integration workload (in terms of integrated commits) on the development team. Finally, we present heuristics capable of accurately identifying PRs with prolonged delivery times; these regression models obtained median AUC values of 0.92 to 0.97.
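    A sketch of the kind of explanatory model described above, on synthetic data rather than the studied projects: features of a merged PR are used to explain whether its delivery was postponed, and the model is assessed with AUC. The features and the toy ground truth are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 1000
X = np.column_stack([
    rng.poisson(30, n),    # commits merged into the target release (workload)
    rng.poisson(5, n),     # PR size in files
    rng.uniform(0, 1, n),  # how late in the release cycle the PR was merged
])
# Toy ground truth: heavier integration workload -> more delay.
y = (X[:, 0] + rng.normal(0, 10, n) > 35).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"AUC: {roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]):.2f}")
```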

18
  • JEAN GLEISON DE SANTANA SILVA
  • Traveling Salesman Problem with Ridesharing and Quota 

  • Advisor : MARCO CESAR GOLDBARG
  • COMMITTEE MEMBERS :
  • MARCO CESAR GOLDBARG
  • ELIZABETH FERREIRA GOUVEA GOLDBARG
  • SILVIA MARIA DINIZ MONTEIRO MAIA
  • MATHEUS DA SILVA MENEZES
  • Data: Jul 31, 2017


  • Show Abstract
  • The Traveling Salesman Problem with Ridesharing and Quota belongs to the class of Quota Traveling Salesman problems. It considers the economic advantage achieved when the salesman, traveling in a private vehicle, gives rides to passengers who share travel expenses with him. The model can represent real situations where a driver plans a route to visit cities, each associated with a bonus, with the requirement of collecting a minimum sum of bonuses and taking into account the possibility of reducing costs thanks to the people aboard the vehicle. A mathematical model, six evolutionary algorithms and one heuristic are presented for the problem. The behavior of the proposed algorithms is analyzed in a computational experiment with 48 instances.

19
  • LUIZ FERNANDO VIRGINIO DA SILVA
  • Extracting Traffic Data from Videos in Real-Time

  • Advisor : BRUNO MOTTA DE CARVALHO
  • COMMITTEE MEMBERS :
  • BRUNO MOTTA DE CARVALHO
  • ANNE MAGALY DE PAULA CANUTO
  • ALUISIO IGOR REGO FONTES
  • Data: Jul 31, 2017


  • Show Abstract
  • Some of the major problems in large cities are related to traffic on public streets. Problems such as traffic jams and vehicle accidents directly impact society in a negative way, and are usually attributed to a lack of urban planning by the government and a lack of public policies or research aimed at solving these problems, even partially. Such research depends on data that must be collected in loco on the main avenues and streets of a city, demanding a large workforce to collect them manually and possibly incurring errors due to the manual process. It is common to see CCTV systems around a city being used as the main means of traffic monitoring. We therefore devised a solution capable of collecting these data automatically in real time, using videos captured by those cameras. The proposed solution consists of the following steps: (i) detect objects using motion segmentation; (ii) apply pattern recognition methods, such as machine learning methods or point descriptors, to identify and classify vehicles, dealing at this stage with the problem of occluded vehicles; and (iii) track each object, using Kalman filters or the method of Senior, in order to obtain relevant traffic data. Initially, we collect direction, velocity and flow intensity through volumetric counting, which can also be used for purposes other than the one handled here. Finally, we demonstrate the efficiency of the proposed approach in experiments using videos acquired under different lighting conditions.
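    The tracking step mentioned in item (iii) can be sketched with a minimal constant-velocity Kalman filter over a vehicle's image position; the noise settings and measurements below are illustrative assumptions, not the system's calibration.

```python
import numpy as np

# State: [x, y, vx, vy]; measurement: the detected [x, y] position.
dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]])
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]])
Q = np.eye(4) * 0.01   # process noise
R = np.eye(2) * 4.0    # measurement noise (pixels^2)

x = np.zeros(4)        # initial state
P = np.eye(4) * 100.0  # initial uncertainty

for z in [np.array([10.0, 5.0]), np.array([12.1, 5.4]), np.array([14.0, 5.9])]:
    x, P = F @ x, F @ P @ F.T + Q                  # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    x = x + K @ (z - H @ x)                        # update with detection
    P = (np.eye(4) - K @ H) @ P
    print(f"position ({x[0]:.1f}, {x[1]:.1f}), velocity ({x[2]:.1f}, {x[3]:.1f})")
```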

     

20
  • ZAILTON SACHAS AMORIM CALHEIROS
  • Traveling Salesman with Passengers

  • Advisor : MARCO CESAR GOLDBARG
  • COMMITTEE MEMBERS :
  • MARCO CESAR GOLDBARG
  • ELIZABETH FERREIRA GOUVEA GOLDBARG
  • SILVIA MARIA DINIZ MONTEIRO MAIA
  • MATHEUS DA SILVA MENEZES
  • Data: Jul 31, 2017


  • Show Abstract
  • This work presents a vehicle seat sharing model intended to reduce travel costs for drivers and passengers, contributing significantly to the environment and society. The problem is described by a linear programming model, and some variants of an important subproblem for solving the main problem are discussed. In addition, several computational approaches are implemented, comprising evolutionary (genetic and memetic) and constructive (ant colony optimization) algorithms, as well as adaptations of existing algorithms for the traveling salesman problem, such as the Lin-Kernighan algorithm. The experiments show that the ant-based algorithms are promising for asymmetric instances, while the Lin-Kernighan algorithm takes advantage of the robustness of Helsgaun's implementation and performs well on symmetric instances.

21
  • GABRIEL ALVES VASILJEVIC MENDES
  • Brain-Computer Interface Games based on Consumer-Grade Electroencephalography Devices: Systematic Review and Controlled Experiments

  • Advisor : LEONARDO CUNHA DE MIRANDA
  • COMMITTEE MEMBERS :
  • BRUNO MOTTA DE CARVALHO
  • FABRICIO LIMA BRASIL
  • LEONARDO CUNHA DE MIRANDA
  • SELAN RODRIGUES DOS SANTOS
  • Data: Jul 31, 2017


  • Show Abstract
  • Brain-computer interfaces (BCIs) are specialized systems that allow users to control a computer or a machine using their brain waves. BCI systems allow patients with severe physical impairments, such as those suffering from amyotrophic lateral sclerosis, cerebral palsy or locked-in syndrome, to communicate and regain physical movements with the help of specialized equipment. With the development of BCI technology in the second half of the 20th century and the advent of consumer-grade BCI devices in the late 2000s, brain-controlled systems started to find applications not only in the medical field, but also in areas such as entertainment. One area that has gained prominence with the arrival of consumer-grade devices is computer games, which have become increasingly popular in BCI research as they allow more user-friendly applications of BCI technology for both healthy users and patients. However, numerous challenges must still be overcome to advance this field, as the origins and mechanics of brain waves, and how they are affected by external stimuli, are not yet fully understood. In this sense, a systematic literature review of BCI games based on consumer-grade technology was performed. Based on its results, two BCI games, one using attention and the other using meditation as the control signal, were developed in order to investigate key aspects of player interaction: the influence of graphical elements on attention and control; the influence of auditory stimuli on meditation and workload; and the differences in both performance and multiplayer game experience, all in the context of neurofeedback-based BCI games.

22
  • DALAY ISRAEL DE ALMEIDA PEREIRA
  • A tool extension for formal support to component-based development

  • Advisor : MARCEL VINICIUS MEDEIROS OLIVEIRA
  • COMMITTEE MEMBERS :
  • AUGUSTO CEZAR ALVES SAMPAIO
  • MARCEL VINICIUS MEDEIROS OLIVEIRA
  • MARTIN ALEJANDRO MUSICANTE
  • Data: Aug 18, 2017


  • Show Abstract
  • Component-based development is considered an important evolution in the software development process. Under this approach, system maintenance is facilitated, bringing more reliability and reuse of components. However, the composition of components (and their interactions) is still the main source of problems and requires more detailed analysis. This problem is even more relevant for safety-critical applications. One approach for specifying this kind of application is the use of formal methods, a rigorous tool for system specification with a strong mathematical foundation that brings, among other benefits, more safety. For example, the formal method CSP allows the specification of concurrent systems and the verification of properties inherent to such systems. CSP has a set of supporting verification tools, such as FDR. Using CSP, one can detect and solve problems like deadlock and livelock in a system, although this can be costly in terms of verification time. In this context, BRICK emerged as a CSP-based approach for developing asynchronous component-based systems that guarantees deadlock freedom by construction. The approach uses CSP to specify the constraints and interactions between components, allowing a formal verification of the system. However, its practical use can be too complex and cumbersome. In order to automate the use of the BRICK approach, we previously developed a tool (BTS - BRICK Tool Support) that automates the verification of component compositions by automatically generating, and checking with FDR, the side conditions imposed by the approach. But, due to the number and complexity of the verifications performed in FDR, the tool can still take too much time in this process.
    In this dissertation, we present an extension to BTS that improves the way it performs verifications, replacing the FDR version used inside the tool with its most recent release and adding an SMT solver that concurrently checks some properties of the specification. We also evaluated the tool with a new case study, comparing the verifications performed by the older version of the tool with this new verification approach. The internal structure of the tool has also been improved to make it easier to extend, which is a further contribution of our work.

23
  • ALAN DE OLIVEIRA SANTANA
  • Generation of virtual tutors for PBL-based classes

  • Advisor : EDUARDO HENRIQUE DA SILVA ARANHA
  • COMMITTEE MEMBERS :
  • EDUARDO HENRIQUE DA SILVA ARANHA
  • MARCIA JACYNTHA NUNES RODRIGUES LUCENA
  • ROBERTO ALMEIDA BITTENCOURT
  • Data: Nov 30, 2017


  • Show Abstract
  • The high rates of failure and dropout in computing courses are a limiting factor for the development of several professional areas, making skilled labor scarce. Game development courses can mitigate this factor, since they allow students to see, in a playful way, where each studied concept is applied. In this context, the PBL model adds value by allowing students to learn how to learn, efficiently developing personal skills of interpretation and problem solving, as well as teamwork skills. However, PBL-based courses keep classes small so that the teacher can assist students more effectively; this limits the number of places offered, due to the cost of teachers, materials per class, and other expenses, in both face-to-face and distance education. One way to mitigate these costs and improve the efficiency of PBL-based courses is through digital virtual tutors. These tutors have features that simulate a teacher, directing and assisting students while having more time available, which lets students learn at their own pace. Within a class, however, students have different goals and abilities. The objective of this dissertation is therefore to propose an architecture for generating virtual tutors for different student profiles, to be applied in game programming courses based on the PBL model. To assess the demand for and the contributions of this proposal, studies were carried out to collect the data needed to develop tutors for each student profile and to evaluate the benefits each tutor can bring to each profile. The first study was a systematic literature review to observe how tutors have been developed, mainly in relation to affective computing. After this study, a prototype was developed for mission-based classes (activities related to the PBL model), with an animated virtual character presenting the contents. This prototype was used in an experiment with secondary-level students, which motivated the development of a new tutor for a new, more interactive student profile. This new prototype was built on the ALICE chatbot and the AIML language. Upon completion of this tutor, an experiment was conducted using both tutors to evaluate the proposal of a tutor generator applied to the same class. This experiment involved undergraduate students, so that these future teachers could replicate the study in their own classes. Finally, the data were analyzed and the research questions posed during the study could be answered. The results show that the students had fun during the classes, motivated by the playful factor of game programming classes and virtual tutors. More than 75% of the students completed the proposed problems, with the remaining students completing around 90% of them. Another observation was that the groups split roughly 56% to 44% between the tutors that best matched their profiles. These data suggest the possibility of providing a generator of tutors for the different student profiles in such classes, although further studies are needed to corroborate them. In general, the tutors made the classes more dynamic and productive, proving to be excellent tools to support students in PBL-based game development.

Thesis
1
  • DANIEL ALENCAR DA COSTA
  • Understanding the Delivery Delays of Addressed Issues in Large Software Projects

  • Advisor : UIRA KULESZA
  • COMMITTEE MEMBERS :
  • UIRA KULESZA
  • EDUARDO HENRIQUE DA SILVA ARANHA
  • FERNANDO MARQUES FIGUEIRA FILHO
  • AHMED HASSAN
  • MARCELO DE ALMEIDA MAIA
  • MARCO TULIO DE OLIVEIRA VALENTE
  • Data: Feb 8, 2017


  • Show Abstract
  • The timely delivery of addressed software issues (i.e., bug fixes, enhancements, and new features) is what drives software development. Previous research has investigated what impacts the time to triage and address (or fix) issues. Nevertheless, even after an issue is addressed, i.e., a solution is coded and tested, it may still suffer delay before being delivered to end users. Such delays are frustrating, since end users care most about when an addressed issue becomes available in the software system (i.e., released). In this matter, there is a lack of empirical studies investigating why addressed issues take longer to be delivered compared to other issues.

    In this thesis, we perform empirical studies to understand which factors are associated with the delayed delivery of addressed issues. We find that 34% to 98% of the addressed issues of the ArgoUML, Eclipse and Firefox projects have their integration delayed by at least one release. Our explanatory models achieve ROC areas above 0.74 when explaining delivery delay. We also find that the workload of integrators and the moment at which an issue is addressed are the factors most strongly associated with delivery delay. We further investigate the impact of rapid release cycles on the delivery delay of addressed issues. Interestingly, we find that the rapid release cycles of Firefox are not related to faster delivery of addressed issues: although rapid release cycles address issues faster than traditional ones, such addressed issues take longer to deliver. Moreover, we find that rapid releases deliver addressed issues more consistently than traditional ones. Finally, we survey 37 developers of the ArgoUML, Eclipse, and Firefox projects to understand why delivery delays occur. We find that the allure of delivering addressed issues more quickly to users is the most recurrent motivator for switching to a rapid release cycle, and the prospect of improving the flexibility and quality of addressed issues is another advantage perceived by our participants. The perceived reasons for delivery delays are related to decision making, team collaboration, and risk management activities, and delivery delays likely lead to user and developer frustration according to our participants. Our thesis is the first work to study this important topic in modern software development, and our studies highlight the complexity of delivering issues in a timely fashion (for instance, simply switching to a rapid release cycle is not a silver bullet that guarantees quicker delivery of addressed issues).

2
  • DEMOSTENES SANTOS DE SENA
  • An Approach to Support the Extraction of Exception Handling Policies

  • Advisor : ROBERTA DE SOUZA COELHO
  • COMMITTEE MEMBERS :
  • ROBERTA DE SOUZA COELHO
  • EDUARDO HENRIQUE DA SILVA ARANHA
  • UIRA KULESZA
  • EIJI ADACHI MEDEIROS BARBOSA
  • FRANCISCO DANTAS DE MEDEIROS NETO
  • RODRIGO BONIFACIO DE ALMEIDA
  • Data: Feb 13, 2017


  • Show Abstract
  • The exception handling (EH) mechanism is a technique embedded in most mainstream programming languages to support the development of robust systems. The exception handling policy comprises the set of EH design rules, which specify the elements (methods, classes and packages) responsible for raising, propagating and catching exceptions, as well as the handling actions. Historically, the implementation of exception handling code is postponed or ignored in the software development process. As a consequence, empirical studies have demonstrated that inappropriate exception handling is a source of bug hazards. Due to the implicit nature of exception flows, identifying exception handling code is a complex task. Some approaches have been proposed to address the problems resulting from misunderstood or inadequate exception handling: some expose the exception flows (e.g., graphically), while others define exception handling rule languages with tool support for EH policy definition and checking. However, none of the proposed approaches support the phase of exception policy definition.

    This work proposes an approach that helps the architect extract the EH rules by analyzing the existing code. In doing so, it fills a gap prior to EH policy definition, comprehension and checking. To support the proposed approach, a static analysis tool suite was developed, which performs: (i) the discovery of exception flows and their handling actions; (ii) the definition of compartments; (iii) the semi-automatic rule extraction process; and (iv) rule checking and identification of the causes of rule violations. The approach was applied in two empirical studies. In the first study, 656 libraries from the Maven Central repository were analyzed, with the main goal of revealing and characterizing their exception handling policies. This study revealed that 80.9% of the analyzed libraries have exception flows that implement at least one exceptional anti-pattern. In the second study, we investigated the benefits of the rule extraction process for understanding and refining an exception handling policy. Two web information systems (IProject and SIGAA) were analyzed. We found that all extracted rules belonged to the set of rules reported by the architects, and the extraction process allowed new rules to be added to the policy; these added rules corresponded to 57.1% (IProject) and 52.8% (SIGAA/Graduação) of the rules of the analyzed systems. The checking process for the defined rules, supported by our approach, showed that 35.6% (IProject) and 45.7% (SIGAA/Graduação) of the exception flows violated some defined rule.

3
  • THIAGO REIS DA SILVA
  • Investigating the Use of Online Game Programming in K-12 Education

  • Advisor : EDUARDO HENRIQUE DA SILVA ARANHA
  • COMMITTEE MEMBERS :
  • AYLA DÉBORA DANTAS DE SOUZA REBOUÇAS
  • EDUARDO HENRIQUE DA SILVA ARANHA
  • MARCIA JACYNTHA NUNES RODRIGUES LUCENA
  • ROMMEL WLADIMIR DE LIMA
  • SÉRGIO CASTELO BRANCO SOARES
  • Data: Mar 11, 2017


  • Show Abstract
  • The introduction of activities, or even dedicated subjects, related to the teaching of programming in schools has been increasingly discussed, with enthusiasm from the associations and commissions that deal with the teaching of computing.
    In Brazil, the Brazilian Computer Society recommends that notions of programming be taught from K-12 education onward, arguing that the associated skills, when built early, contribute to the development of logical reasoning and problem solving, help increase the number of professionals in the area, and foster the vocational awakening of young people. Digital games have been widely used to introduce the teaching of programming in schools, as they offer playful and interactive moments in the learning process, are attractive to students and increase their readiness to learn. In this context, this doctoral work proposes: (i) to create a methodology for teaching online digital game programming that can be used on a large scale; (ii) to teach video game programming in schools; and (iii) to conduct empirical studies that seek to identify the benefits and limitations of the proposed methodology.

4
  • HANIEL MOREIRA BARBOSA
  • New techniques for instantiation and proof production in SMT solving

  • Advisor : DAVID BORIS PAUL DEHARBE
  • COMMITTEE MEMBERS :
  • ANDREW REYNOLDS
  • CATHERINE DUBOIS
  • DAVID BORIS PAUL DEHARBE
  • ERIKA ÁBRAHÁM
  • JOAO MARCOS DE ALMEIDA
  • PASCAL FONTAINE
  • PHILIPP RÜMMER
  • STEPHAN MERZ
  • Data: Sep 5, 2017


  • Show Abstract
  • A variety of real-world applications, such as formal verification, program synthesis, automatic testing, and program analysis, rely on satisfiability modulo theories (SMT) solvers as backends to automatically discharge proof obligations. While these solvers are extremely efficient at handling large ground formulas with theory symbols, such as arithmetic ones, they still struggle to deal with quantifiers.
    
    Our first contribution in this thesis is to provide a uniform and efficient framework for reasoning with quantified formulas in CDCL(T), the calculus most commonly used in SMT solvers, in which various instantiation techniques are generally employed to handle quantifiers. We show that the major instantiation techniques can all be cast in a unifying framework for quantified formulas with equality and uninterpreted functions. This framework is based on the problem of E-ground (dis)unification, a variation of the classic rigid E-unification problem. We introduce a sound and complete calculus to solve this problem in practice: Congruence Closure with Free Variables (CCFV). An experimental evaluation of the implementation of CCFV in the SMT solver veriT validates the approach: veriT exhibits significant improvements and is now competitive with state-of-the-art solvers on several benchmark libraries stemming from real-world applications.
    
    Our second contribution is a framework for processing formulas in SMT solvers with the generation of detailed proofs. Our goal is to increase the reliability of the results of automated reasoning systems, by providing justifications that can be efficiently checked independently, and to improve their usability by external applications. Proof assistants, for instance, generally require the reconstruction of the justification provided by the solver for a given proof obligation.
    
    The main components of our proof-producing framework are a generic contextual recursion algorithm and an extensible set of inference rules. Clausification, skolemization, theory-specific simplifications, and expansion of ‘let’ expressions are instances of this framework. With suitable data structures, proof generation adds only a linear-time overhead, and proofs can be checked in linear time. We also implemented the approach in veriT. This allowed us to dramatically simplify the code base while increasing the number of problems for which detailed proofs can be produced.
    
5
  • HELBER WAGNER DA SILVA
  • A Cross-Layer Framework for High Resource Demanding Multiuser Session Control in Softwarized IoT Networks

  • Advisor : AUGUSTO JOSE VENANCIO NETO
  • COMMITTEE MEMBERS :
  • ANTÔNIO ALFREDO FERREIRA LOUREIRO
  • AUGUSTO JOSE VENANCIO NETO
  • EDUARDO COELHO CERQUEIRA
  • GIBEON SOARES DE AQUINO JUNIOR
  • THAIS VASCONCELOS BATISTA
  • Data: Nov 29, 2017


  • Show Abstract
  • Mission-critical applications (AMC) represent one of the most promising use cases of the Internet of Things (IoT), as they promise to impact vital areas such as smart surveillance in environments with high human density, safe autonomous vehicle traffic, precise remote surgery, and many others. AMC are expected to exploit the content made available by IoT platforms in softwarized IoT network (IoTS) scenarios, where an IoT system runs over a network infrastructure that includes a software-defined networking substrate to allow flexibility in control operations. However, AMC have strong Quality of Service (QoS) requirements, such as latency, jitter and loss bounds, as well as a high demand for network resources (e.g., node processing, paths and link bandwidth) that must be guaranteed by the IoTS to ensure efficiency and accuracy. The variability and dynamicity of service requirements in this scenario are very high, ranging from scalar data collection (e.g., environmental sensors) to real-time digital multimedia (e.g., video and audio) processing. In this IoTS scenario with QoS-sensitive AMC, a control plane is needed that can provide a fine-grained transport service with guaranteed quality, in an optimized and autonomous way. This thesis goes beyond the state of the art by defining a holistic framework for controlling highly quality-sensitive multiuser sessions (aggregating multiple AMC sharing content from an IoT platform) in an IoTS, with fine-grained methods for quality-oriented, self-organized orchestration, resource control and management. The framework, called CLASSICO (Cross-LAyer SDN SessIon COntrol), copes with IoTS variability and dynamicity by dynamically allocating resources to meet AMC requirements for high bandwidth and very low latency throughout the session, leveraging the Software-Defined Networking (SDN) substrate for flexibility and modularity. To achieve these objectives, CLASSICO defines a cross-layer control plane, integrated with the IoTS, that considers the content parameters (at the application level) required by the AMC to build and maintain QoS-oriented multiuser sessions, and induces an optimized multiuser IoTS through group-based transport (at the network level), while increasing the scalability of the IoT system. For validation purposes, CLASSICO was prototyped and evaluated on a real testbed in a video use case. The evaluation results reveal the gains of CLASSICO in terms of QoS and Quality of Experience (QoE) compared to an SDN multicast-based solution.

6
  • CLEVERTON HENTZ ANTUNES
  • A Family of Coverage Criteria Based on Patterns for Testing Metaprograms

  • Advisor : ANAMARIA MARTINS MOREIRA
  • COMMITTEE MEMBERS :
  • ANAMARIA MARTINS MOREIRA
  • MARTIN ALEJANDRO MUSICANTE
  • PAULO HENRIQUE MONTEIRO BORBA
  • ROHIT GHEYI
  • UMBERTO SOUZA DA COSTA
  • Data: Dec 15, 2017


  • Show Abstract
  • Although there are several techniques for the automatic generation of test data based on grammars, few studies have been proposed to improve the generated test data by applying semantic restrictions. This work contributes in this direction for the particular case of testing metaprograms, i.e., programs that take other programs as input. Currently, the natural alternative for testing this kind of program is grammar-based testing, a technique that can be applied relatively easily but has high costs, related to the generation and execution of the test set, and low effectiveness. Much research and many tools dedicated to the development of metaprograms make heavy use of pattern matching in their implementation and specification. In this case, the patterns offer an interesting source of information for creating tests that are syntactically valid and also satisfy semantic constraints. Given the limitations of grammar-based testing and the pattern information available in metaprograms, we have an opportunity to improve the testing process for these programs. The goal of this work is therefore to evaluate the use of pattern information for testing metaprograms. In order to systematize the testing process, a family of pattern-based coverage criteria is proposed to test metaprograms efficiently and systematically. Four pattern-based coverage criteria are proposed, based on the classical combination criteria of input space partitioning, together with a hierarchical relationship between them, so that different levels of rigor can be required by choosing the appropriate criterion. These contributions are validated through a case study and an empirical evaluation. The case study presents a reference instantiation of the test design process applied to a type checker implemented as a pattern-based metaprogram. The type checker is tested with a test set generated by the pattern-based coverage criteria, and the quality of this set is evaluated using the mutation technique; the results are compared with those produced by a test set generated by grammar-based criteria. The experimental studies indicate the effectiveness of the pattern-based criteria and a better cost-benefit ratio relative to grammar-based coverage criteria.
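    The classical combination criteria that the proposed family builds on can be illustrated as follows, with a hypothetical set of pattern characteristics and blocks (not the dissertation's definitions): the all-combinations criterion enumerates every tuple of blocks, while a pairwise criterion only requires every pair of blocks from different characteristics to be covered.

```python
from itertools import product, combinations

# Hypothetical characteristics and blocks for a pattern under test.
blocks = {
    "constructor": ["Var", "App", "Lambda"],
    "nesting":     ["flat", "nested"],
    "wildcards":   ["none", "some"],
}

# All-combinations: every tuple of blocks (strongest, most expensive).
all_combinations = list(product(*blocks.values()))

# Pairwise: every pair of blocks from two characteristics must appear in
# at least one test; shown here only as the coverage requirement set.
pair_requirements = {
    (a, b)
    for k1, k2 in combinations(blocks, 2)
    for a, b in product(blocks[k1], blocks[k2])
}
print(len(all_combinations), "full tuples;", len(pair_requirements), "pairs to cover")
```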

2016
Dissertations
1
  • MARCOS OLIVEIRA DA CRUZ
  • AccNoSys: An Adaptive Accelerator Architecture with Network-on-Chip-Based Interconnection

  • Advisor : MONICA MAGALHAES PEREIRA
  • COMMITTEE MEMBERS :
  • MONICA MAGALHAES PEREIRA
  • MARCIO EDUARDO KREUTZ
  • IVAN SARAIVA SILVA
  • SILVIO ROBERTO FERNANDES DE ARAUJO
  • Data: Jan 22, 2016


  • Show Abstract
  • The evolution of processors has been marked by a growing demand for performance to meet ever larger and more complex applications. Along with this need for performance, the heterogeneity of applications also demands great flexibility from processors. Conventional processors can provide performance or flexibility, but always privilege one of these aspects to the detriment of the other. Coarse-grained adaptive accelerator architectures have been proposed as a solution capable of offering flexibility and performance at the same time. However, one of the main challenges of this kind of architecture is application mapping, which is an NP-complete problem. Among the factors contributing to this complexity is the interconnection model used, which is normally based on a crossbar or some model close to it. Parallelism exploitation techniques, such as software pipelining, are also used to achieve better performance, and they further increase the complexity of the mapping algorithms. This work presents an adaptive architecture that uses a packet-based communication mechanism to interconnect functional units. Combined with this interconnection model, the architecture can exploit parallelism at two levels, namely ILP (including software pipelining techniques) and TLP. Application mapping is performed at compile time using an algorithm developed for the architecture with O(1) complexity. The architecture was implemented in SystemC, and the execution of several applications was simulated, exploiting both ILP and TLP. The simulations achieved, on average, a 41% performance gain compared with an 8-stage pipeline RISC processor. The simulation results confirm that it is possible to exploit the applications' inherent parallelism. Moreover, by choosing the mapping model (such as exploiting threads, or instruction-level parallelism, loops, etc.), different results can be obtained by adapting the architecture to the application.

2
  • LUCAS DANIEL MONTEIRO DOS SANTOS PINHEIRO
  • Experimental Algorithms for the Biobjective Adjacent-Edge Quadratic Spanning Tree Problem

  • Advisor : ELIZABETH FERREIRA GOUVEA GOLDBARG
  • COMMITTEE MEMBERS :
  • ELIZABETH FERREIRA GOUVEA GOLDBARG
  • MARCO CESAR GOLDBARG
  • RICHARD ADERBAL GONÇALVES
  • SILVIA MARIA DINIZ MONTEIRO MAIA
  • Data: Feb 3, 2016


  • Show Abstract
  • The Quadratic Minimum Spanning Tree (QMST) problem is a generalization of the Minimum Spanning Tree problem in which, in addition to the linear costs of the edges, quadratic costs associated with each pair of edges are considered. The quadratic costs are due to interaction costs between the edges. When the interactions occur only between adjacent edges, the problem is called the Adjacent-Edge Quadratic Minimum Spanning Tree (AQMST) problem. Both QMST and AQMST are NP-hard and model several real-world problems involving the design of infrastructure networks. In the single-objective versions of these problems, the linear and quadratic costs are added together. Real applications, however, often deal with conflicting objectives; in such cases, considering the linear and quadratic costs separately is more adequate, and multiobjective optimization provides more realistic models. Exact and heuristic algorithms are investigated in this work for the biobjective version of the AQMST. The following techniques are proposed: backtracking, branch-and-bound, local search, Greedy Randomized Adaptive Search Procedure (GRASP), Simulated Annealing, NSGA-II, a Transgenetic Algorithm, Particle Swarm Optimization, and a hybridization of MOEA/D with the Transgenetic Algorithm. Pareto-compliant quality indicators are used to compare the algorithms on a set of benchmark instances from the literature.
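
    In the notation assumed below (chosen for this summary, not taken from the thesis), with spanning trees T of a graph G, linear edge costs c_e, and interaction costs q_{ef} charged only for adjacent edge pairs, the biobjective version optimizes the two sums separately instead of adding them:

        \min_{T \in \mathcal{T}(G)} \bigl( f_1(T),\, f_2(T) \bigr), \qquad
        f_1(T) = \sum_{e \in T} c_e, \qquad
        f_2(T) = \sum_{\substack{\{e,f\} \subseteq T \\ e,f \text{ adjacent}}} q_{ef}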

3
  • INGRID MORGANE MEDEIROS DE LUCENA
  • A Review of Optimization Models and Algorithms for the Software Testing Area

  • Advisor : ELIZABETH FERREIRA GOUVEA GOLDBARG
  • COMMITTEE MEMBERS :
  • ELIZABETH FERREIRA GOUVEA GOLDBARG
  • MARCO CESAR GOLDBARG
  • RICHARD ADERBAL GONÇALVES
  • SILVIA MARIA DINIZ MONTEIRO MAIA
  • Data: Feb 3, 2016


  • Show Abstract
  • The area known as Search-Based Software Engineering (SBSE) has grown over the last decades and now has a large number of works dedicated to it. This area brings together Software Engineering and Optimization in the development of algorithms that optimize the costs of activities inherent to the software development process. Among such activities is software testing, which aims to verify, detect, and correct possible errors made by programmers. Since this activity is responsible for up to 50% of the total development cost, researchers seek to minimize the cost of testing without compromising software quality. The first works addressing software testing activities as optimization problems appeared in the 1970s. This work aims to review the state of the art of optimization techniques and algorithms developed for software testing, extending a previous survey with the review of 415 papers in the area. A classification of these works regarding metric types, optimization algorithms, and other characteristics of the problems inherent to software testing is also presented.

4
  • HUDSON GEOVANE DE MEDEIROS
  • Investigations on Archiving Techniques for Multiobjective Optimizers

  • Advisor : ELIZABETH FERREIRA GOUVEA GOLDBARG
  • COMMITTEE MEMBERS :
  • ELIZABETH FERREIRA GOUVEA GOLDBARG
  • MARCO CESAR GOLDBARG
  • SILVIA MARIA DINIZ MONTEIRO MAIA
  • AURORA TRINIDAD RAMIREZ POZO
  • Data: Feb 5, 2016


  • Show Abstract
  • Multiobjective problems, unlike single-objective ones, generally have several optimal solutions, which compose the Pareto-optimal set. A class of heuristic algorithms for such problems, here called optimizers, produces approximations of this set. Due to the large number of solutions generated during optimization, many of them are discarded, since maintaining and frequently comparing all of them would demand too much time. As an alternative, many optimizers work with bounded archives. A problem that arises in these cases is the need to discard non-dominated solutions, that is, solutions that are optimal so far. Many techniques have been proposed to deal with the discarding of non-dominated solutions, and investigations have shown that none of them is completely able to prevent the deterioration of the archives. This work investigates a technique to be used together with those previously proposed in the literature in order to improve the quality of the archives: periodically recycling discarded solutions. To verify whether this idea can improve the optimizers' archives during optimization, it was implemented in three algorithms from the literature and tested on several problems. The results showed that, when the optimizers are already able to perform a good optimization and solve the problems satisfactorily, deterioration is small and the recycling method is ineffective. However, in cases where the optimizer deteriorates significantly, recycling was able to prevent this deterioration in the approximation set.
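
    The recycling idea can be summarized by the following minimal Python sketch (assumptions: minimization, random eviction, and a fixed recycling period; the thesis' actual optimizers and archiving rules are not reproduced here):

        import random

        def dominates(a, b):
            return all(x <= y for x, y in zip(a, b)) and a != b

        class RecyclingArchive:
            def __init__(self, capacity):
                self.capacity, self.items, self.bin = capacity, [], []

            def offer(self, sol):
                if any(dominates(kept, sol) for kept in self.items):
                    return  # dominated by the archive: rejected
                self.items = [s for s in self.items if not dominates(sol, s)]
                self.items.append(sol)
                if len(self.items) > self.capacity:
                    evicted = random.choice(self.items)  # non-dominated, yet discarded
                    self.items.remove(evicted)
                    self.bin.append(evicted)             # kept for recycling

            def recycle(self):
                recycled, self.bin = self.bin, []
                for s in recycled:
                    self.offer(s)  # re-offer previously discarded solutions

        archive = RecyclingArchive(capacity=50)
        for step in range(1, 1001):
            archive.offer((random.random(), random.random()))
            if step % 100 == 0:
                archive.recycle()
        print(len(archive.items), "solutions kept")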

5
  • GUILHERME FERNANDES DE ARAÚJO
  • Metaheuristic Algorithms for Solving the Traveling Salesman Problem with Multiple Ridesharing

  • Advisor : MARCO CESAR GOLDBARG
  • COMMITTEE MEMBERS :
  • MARCO CESAR GOLDBARG
  • ELIZABETH FERREIRA GOUVEA GOLDBARG
  • SILVIA MARIA DINIZ MONTEIRO MAIA
  • LUCÍDIO DOS ANJOS FORMIGA CABRAL
  • Data: Feb 12, 2016


  • Show Abstract
  • The Traveling Salesman Problem with Multiple Ridesharing (PCV-MCa) is a class of the capacitated Traveling Salesman Problem in which seats may be shared with passengers, who take advantage of the salesman's trips between the localities of the cycle. The salesman shares the cost of each stretch of the route with the passengers on board. The model can represent a real situation in which, for example, drivers are willing to share stretches of their trip with tourists who want to travel between two localities on the driver's route, accepting to share the vehicle with other individuals and to pass through other localities of the cycle.
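
    The cost-sharing rule mentioned above can be illustrated by a small Python sketch (a toy under the assumption that each leg's cost is split evenly between the salesman and the passengers on board; the thesis' full model with capacities and passenger constraints is not reproduced):

        def salesman_cost(legs):
            """legs: list of (leg_cost, passengers_on_board) pairs."""
            return sum(cost / (1 + riders) for cost, riders in legs)

        # Three legs costing 10, 12 and 8, carrying 1, 3 and 0 passengers.
        print(salesman_cost([(10, 1), (12, 3), (8, 0)]))  # 5 + 3 + 8 = 16.0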

6
  • RAYRON VICTOR MEDEIROS DE ARAUJO
  • A probabilistic analysis of the biometrics menagerie existence: a case study in fingerprint data

  • Advisor : MARJORY CRISTIANY DA COSTA ABREU
  • COMMITTEE MEMBERS :
  • BRUNO MOTTA DE CARVALHO
  • MARJORY CRISTIANY DA COSTA ABREU
  • DANIEL SABINO AMORIM DE ARAUJO
  • GEORGE DARMITON DA CUNHA CAVALCANTI
  • Data: Feb 18, 2016


  • Show Abstract
  • Until recently, the use of biometrics was restricted to high-security environments and criminal identification applications, for economic and technological reasons. In recent years, however, biometric authentication has become part of people's daily lives. The large-scale use of biometrics has shown that users of a system may exhibit different degrees of accuracy: some people may have trouble authenticating, while others may be particularly vulnerable to imitation. Recent studies have investigated and identified these types of users, giving them animal names: sheep, goats, lambs, wolves, doves, chameleons, worms, and phantoms. The aim of this study is to evaluate the existence of these user types in a fingerprint database and to propose a new way of investigating them, based on verification performance between subjects' samples. After introducing some basic concepts of biometrics and fingerprints, we present the biometric menagerie and how to evaluate it.

7
  • LEANDRO DE ALMEIDA MELO
  • A Proposal of Indicators for Monitoring Students in Collaborative Software Development Projects with a Focus on the Development of Soft Skills

  • Advisor : FERNANDO MARQUES FIGUEIRA FILHO
  • COMMITTEE MEMBERS :
  • FERNANDO MARQUES FIGUEIRA FILHO
  • EDUARDO HENRIQUE DA SILVA ARANHA
  • ANDRE MAURICIO CUNHA CAMPOS
  • IGOR FABIO STEINMACHER
  • Data: Feb 22, 2016


  • Show Abstract
  • Soft skills and project development practices have been identified as the main deficiencies of computing graduates. This problem motivated a qualitative study on the challenges faced by professors of such courses in conducting, monitoring, and evaluating collaborative software development projects. Among the challenges identified, the difficulties of monitoring and evaluating student participation in academic projects stand out. In this context, a second, quantitative study was conducted with the goal of mapping students' soft skills to a set of indicators that can be extracted from software repositories using data mining techniques. These indicators aim to help professors monitor soft skills such as teamwork participation, leadership, problem solving, and communication pace during projects. To this end, a peer-assessment approach was applied in a collaborative development class of the Software Engineering program at the Universidade Federal do Rio Grande do Norte (UFRN). This research presents a correlation study between the students' soft-skill scores and the indicators based on software repository mining. Its goal is to improve the understanding of work dynamics in student collaborative projects, as well as to encourage the development of the soft skills demanded by the software development industry.

8
  • MÁRIO ANDRADE VIEIRA DE MELO NETO
  • An Adaptable Platform for Indoor Localization

  • Advisor : GIBEON SOARES DE AQUINO JUNIOR
  • COMMITTEE MEMBERS :
  • GIBEON SOARES DE AQUINO JUNIOR
  • THAIS VASCONCELOS BATISTA
  • CARLOS ANDRE GUIMARÃES FERRAZ
  • Data: Feb 22, 2016


  • Show Abstract
  • Localization systems have increasingly become an integral part of people's lives. In outdoor environments, GPS stands as the standard technology, widely adopted and used. However, people usually spend most of their day inside indoor environments such as hospitals, universities, factories, and office buildings. In these environments, GPS operation is compromised and it fails to provide accurate positioning. Currently, no technology for locating people or objects indoors achieves results comparable to those obtained by GPS outdoors. It is therefore necessary to combine information from several sources, making use of different technologies. Accordingly, the general goal of this work is to build an adaptable platform for indoor localization. Based on this goal, the Indolor platform is proposed. This platform aims to receive information from different sources, as well as to process, fuse, store, and make this information available.

9
  • JOILSON VIDAL ABRANTES
  • Specification and Dynamic Monitoring of Exception Handling Policies

  • Advisor : ROBERTA DE SOUZA COELHO
  • COMMITTEE MEMBERS :
  • EDUARDO HENRIQUE DA SILVA ARANHA
  • Felipe Alves Pereira Pinto
  • ROBERTA DE SOUZA COELHO
  • RODRIGO BONIFACIO DE ALMEIDA
  • Data: Feb 25, 2016


  • Show Abstract
  • A system's exception handling policy comprises the set of design rules that specify the behavior and handling of exceptional conditions, i.e., it defines how exceptions should be thrown and handled. This policy is usually not documented and is only implicitly defined by the system architect. For this reason, developers may believe that simply inserting try-catch blocks everywhere exceptions can potentially be thrown adequately handles the exceptional conditions of a system. However, this behavior can turn exception handling into a generalization of the "goto" mechanism, making the program more complex and less reliable. This work proposes a domain-specific language, called ECL (Exception Contract Language), for specifying the exception handling policy, and a runtime monitoring tool, called DAEH (Dynamic Analysis of Exception Handling), that dynamically checks this policy. DAEH is implemented as a library of aspects that can be added to a Java application without modifying its source code. This approach was applied to two web systems, four versions of the JUnit framework, and one mobile application. The results indicate that the approach can be used to express and automatically verify the exception handling policy of systems and, consequently, to support the development of more robust Java systems.
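
    The following Python toy (the actual DAEH tool is an aspect library for Java; the rule format, names, and layers below are illustrative assumptions) conveys the core idea of checking caught exceptions against a declared policy at run time:

        # Policy rule (assumed format): RepositoryError may only be
        # handled by the service layer.
        class RepositoryError(Exception):
            pass

        POLICY = {RepositoryError: {"service_layer"}}

        def report_handling(exc_type, handler_site):
            """Called wherever an exception is caught; flags violations."""
            allowed = POLICY.get(exc_type)
            if allowed is not None and handler_site not in allowed:
                print("VIOLATION:", exc_type.__name__, "handled in",
                      handler_site, "- allowed only in", sorted(allowed))

        def controller():
            try:
                raise RepositoryError("connection lost")
            except RepositoryError:
                # a UI-level catch-all: violates the declared policy
                report_handling(RepositoryError, "ui_layer")

        controller()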

10
  • FABIO DE SOUSA LEAL
  • SLA-Based Guidelines for Database Transitioning 

  • Advisor : MARTIN ALEJANDRO MUSICANTE
  • COMMITTEE MEMBERS :
  • MARTIN ALEJANDRO MUSICANTE
  • MARCIA JACYNTHA NUNES RODRIGUES LUCENA
  • GENOVEVA VARGAS SOLAR
  • PLACIDO ANTONIO DE SOUZA NETO
  • Data: Feb 26, 2016


  • Show Abstract
  • Component-Based Software Engineering (CBSE) and Service-Oriented Architecture (SOA) have become popular ways of developing software in recent years. During a software system's life cycle, several components and services may be developed, evolved, and replaced. In production environments, the replacement of essential components, such as those involving databases, is a delicate operation in which several constraints and stakeholders must be considered. According to the official ITIL v3 glossary, a Service-Level Agreement (SLA) is "an agreement between an IT service provider and a customer. The agreement consists of a set of measurable constraints that a service provider must guarantee to its customers." In practical terms, an SLA is a document that a service provider offers to its consumers guaranteeing minimum levels of quality of service (QoS). This work evaluates the use of SLAs to guide the database transitioning process in production environments. In particular, we propose a set of SLA-based guidelines to support decisions on migrating from relational databases (RDBMS) to NoSQL databases. Our work is validated through case studies.

11
  • MANOEL PEDRO DE MEDEIROS NETO
  • Unmanned Aerial Vehicles and a Delivery System: Study, Development, and Testing

  • Advisor : LEONARDO CUNHA DE MIRANDA
  • COMMITTEE MEMBERS :
  • EDUARDO BRAULIO WANDERLEY NETTO
  • LEONARDO CUNHA DE MIRANDA
  • MONICA MAGALHAES PEREIRA
  • UIRA KULESZA
  • Data: Feb 29, 2016


  • Show Abstract
  • Unmanned vehicles are increasingly present in the daily lives of companies and people, as this type of vehicle is progressively performing activities that were previously carried out only by human beings. However, to better understand the potential of unmanned vehicles, it is important to know their types, characteristics, applications, limitations, and challenges, since only with this knowledge can one understand the potential of using such vehicles in a variety of applications. In this context, the first part of this research studied the different types of unmanned vehicles, i.e., ground, aquatic, aerial, and hybrid. During the second phase of the research, a deeper study was carried out focusing on user interfaces for controlling unmanned aerial vehicles. These two initial domain surveys allowed the identification of challenges and opportunities for developing new applications in this context. Based on the knowledge acquired in these studies, an automated object delivery system for university campuses, named PostDrone University, was developed, along with an unmanned aerial vehicle to perform the deliveries, named PostDrone University UAV K-263. The system has an easy-to-use interface that does not require domain-specific knowledge, such as aviation or aircraft control, for its operation. Finally, several tests were performed to validate the solution developed in this research and to identify its limitations.

12
  • FLADSON THIAGO OLIVEIRA GOMES
  • An approach to analyse code coverage in evolving scenarios

  • Advisor : UIRA KULESZA
  • COMMITTEE MEMBERS :
  • UIRA KULESZA
  • EDUARDO HENRIQUE DA SILVA ARANHA
  • CARLOS EDUARDO DA SILVA
  • ELDER JOSÉ REIOLI CIRILO
  • Data: Mar 3, 2016


  • Show Abstract
  • Nowadays, testing activities in the software development process have become fundamental to ensure the reliability and quality of software systems' code. Frequent evolution of a system's architecture and code creates strong challenges for developers and testers, since the modifications may not behave as expected. In this context, there is a need for tools and mechanisms that decrease the negative impact generated by these frequent system evolutions. A few existing tools show the execution flows of methods affected by evolutions, but most of them do not show whether those affected flows are covered by tests. This work presents an approach whose main goals are: (i) to analyze code coverage taking into consideration the existing system execution flows that were affected by system evolution; (ii) to indicate which system execution flows contain changed methods and are not covered by the existing automated tests and could, therefore, be considered to improve system quality through regression testing; and (iii) to indicate whether there is test quality degradation in terms of non-covered execution flows. We also performed an empirical study that analyzes 6 open-source systems using the proposed approach. The study results identified that between 19% and 92% of the execution flows of those open-source systems are affected by code evolution changes and are not covered by automated tests. In addition, 3 of the 6 systems showed test quality degradation in terms of non-covered execution flows.
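
    A much-simplified Python sketch of the core analysis follows (the flows, method names, and data structures are invented for illustration; the actual approach instruments real systems). It flags execution flows that contain changed methods but are not exercised by the automated tests:

        changed_methods = {"Cart.total", "Cart.applyDiscount"}

        # Execution flows (call chains) recovered from the system.
        flows = [
            ["Checkout.run", "Cart.total", "Tax.compute"],
            ["Checkout.run", "Cart.applyDiscount"],
            ["Report.print", "Cart.total"],
        ]

        # Flows observed while running the automated test suite.
        covered_flows = {("Checkout.run", "Cart.total", "Tax.compute")}

        for flow in flows:
            if set(flow) & changed_methods and tuple(flow) not in covered_flows:
                print("regression-test candidate:", " -> ".join(flow))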

13
  • HULIANE MEDEIROS DA SILVA
  • Cluster Ensembles Optimization Using Coral Reefs Optimization Algorithm

  • Advisor : ANNE MAGALY DE PAULA CANUTO
  • COMMITTEE MEMBERS :
  • ANNE MAGALY DE PAULA CANUTO
  • BRUNO MOTTA DE CARVALHO
  • FLAVIUS DA LUZ E GORGONIO
  • JOAO CARLOS XAVIER JUNIOR
  • ARAKEN DE MEDEIROS SANTOS
  • Data: Mar 4, 2016


  • Show Abstract
  • This work belongs to the machine learning research line, a field associated with Artificial Intelligence and dedicated to developing techniques that allow computers to learn from past experience. In machine learning there are different learning tasks, each belonging to a particular learning paradigm; among them is data clustering, which belongs to the unsupervised learning paradigm. Several clustering algorithms have been successfully used in different applications. However, each algorithm has its own characteristics and limitations, which may produce different solutions for the same dataset. Thus, combining several clustering methods (cluster ensembles), in a way that exploits the characteristics of each algorithm, is a widely used approach for overcoming the limitations of individual clustering techniques. In this context, several approaches have been proposed in the literature to optimize, i.e., to further improve, the solutions found. The goal of this work is to propose an approach for optimizing cluster ensembles through the consensus function, using nature-inspired techniques. The approach consists of building a heterogeneous cluster ensemble whose initial partitions are combined by a method that applies the Coral Reefs Optimization algorithm together with the co-association method, resulting in a final partition. This strategy is evaluated with the Dunn, Calinski-Harabasz, Dom, and Jaccard cluster evaluation indices, in order to analyze the feasibility of the proposed approach. Finally, the performance of the proposed approach is compared with two other approaches: a genetic algorithm with the co-association method, and the traditional co-association method. The comparison is performed using statistical tests, specifically the Friedman test.
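
    The co-association consensus mentioned above can be sketched in a few lines of Python (illustrative only; the partitions are invented and the Coral Reefs Optimization step is omitted): entry (i, j) of the matrix records how often samples i and j fall in the same cluster across the ensemble's base partitions.

        import numpy as np

        # Three base partitions of five samples (one label per sample).
        partitions = np.array([
            [0, 0, 1, 1, 1],
            [0, 0, 0, 1, 1],
            [1, 1, 0, 0, 0],
        ])

        n = partitions.shape[1]
        co = np.zeros((n, n))
        for labels in partitions:
            co += np.equal.outer(labels, labels)  # same-cluster indicator
        co /= len(partitions)                      # similarity in [0, 1]

        # A final partition can then be extracted from this similarity,
        # e.g. by linking samples that co-occur in most partitions.
        print(co)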

14
  • PORFÍRIO DANTAS GOMES
  • A Context-Aware Discovery Service for the Internet of Things

  • Advisor : THAIS VASCONCELOS BATISTA
  • COMMITTEE MEMBERS :
  • FLAVIA COIMBRA DELICATO
  • GIBEON SOARES DE AQUINO JUNIOR
  • PAULO DE FIGUEIREDO PIRES
  • THAIS VASCONCELOS BATISTA
  • Data: Apr 4, 2016


  • Show Abstract
  • The Internet of Things (IoT) is an emerging paradigm characterized by a myriad of heterogeneous devices connected to the Internet. However, the high heterogeneity and wide distribution of the devices available in IoT hinder the deployment of this paradigm, making machines and users face challenges to find, select, and use resources in a fast, reliable, and user-friendly way. In this context, discovery services play a key role, allowing clients (e.g., middleware platforms, end users, and applications) to retrieve resources by specifying search criteria containing a series of attributes, such as resource type, capabilities, location, quality of context (QoC) parameters, etc. This dissertation introduces QoDisco, a distributed discovery service that supports multi-attribute searches, range searches, and synchronous and asynchronous search operations. QoDisco also includes an ontology-based model for the semantic description of resources (i.e., sensors and actuators), services, and sensor data. This dissertation presents in detail (i) QoDisco's architecture, (ii) its information model, (iii) a prototype implementation, and (iv) QoDisco's integration with an IoT middleware platform, EcoDiF. Finally, this work presents a proof of concept in an urban pollution scenario and a qualitative evaluation of the performance of QoDisco's search procedure.
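
    As an illustration of multi-attribute and range search over a resource registry (the schema, attribute names, and function below are assumptions made for this sketch, not QoDisco's actual API or ontology):

        resources = [
            {"type": "sensor", "measures": "PM2.5", "accuracy": 0.9},
            {"type": "sensor", "measures": "PM2.5", "accuracy": 0.6},
            {"type": "actuator", "measures": None, "accuracy": None},
        ]

        def search(registry, equals=None, ranges=None):
            """equals: {attr: value}; ranges: {attr: (lo, hi)}, inclusive."""
            hits = registry
            for attr, value in (equals or {}).items():
                hits = [r for r in hits if r.get(attr) == value]
            for attr, (lo, hi) in (ranges or {}).items():
                hits = [r for r in hits
                        if r.get(attr) is not None and lo <= r[attr] <= hi]
            return hits

        # Accurate PM2.5 sensors (a QoC-style constraint on accuracy).
        print(search(resources,
                     equals={"type": "sensor", "measures": "PM2.5"},
                     ranges={"accuracy": (0.8, 1.0)}))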

15
  • VÍTOR ALCÂNTARA DE ALMEIDA
  • WPTrans: An Assistant for Program Verification in Frama-C

  • Advisor : DAVID BORIS PAUL DEHARBE
  • COMMITTEE MEMBERS :
  • UMBERTO SOUZA DA COSTA
  • DAVID BORIS PAUL DEHARBE
  • PLACIDO ANTONIO DE SOUZA NETO
  • RICHARD WALTER ALAIN BONICHON
  • Data: Apr 29, 2016


  • Show Abstract
  • This dissertation describes an extension to the Frama-C platform and its WP plugin: WPTrans. This extension allows the proof obligations generated by WP to be manipulated through inference rules, and at any stage of this manipulation they can be sent to SMT solvers and proof assistants. Some proof obligations can be validated automatically, while others are too complex for SMT solvers, requiring a manual proof by the developer in a proof assistant. The second approach, however, usually demands significant experience with proof strategies from the user. Some assistants offer communication with automatic provers, but this link can be complex or incomplete, leaving the user with only the manual proof. The goal of this plugin is to connect the two kinds of tools in a precise and complete way, with a simple manipulation language: the user can simplify the proof obligations just enough for them to be validated by some SMT solver. Moreover, the extension is connected directly to WP, which eases the plugin's installation in Frama-C. This extension is also a gateway to other possible features, which are discussed in this document.

16
  • THOMAS FILIPE DA SILVA DINIZ
  • Self-adaptive Authorisation in Cloud-based Systems

  • Advisor : NELIO ALESSANDRO AZEVEDO CACHO
  • COMMITTEE MEMBERS :
  • NELIO ALESSANDRO AZEVEDO CACHO
  • THAIS VASCONCELOS BATISTA
  • CARLOS EDUARDO DA SILVA
  • CARLOS ANDRE GUIMARÃES FERRAZ
  • Data: May 2, 2016


  • Show Abstract
  • Although major advances have been made in the protection of cloud platforms against malicious attacks, little has been done regarding the protection of these platforms against insider threats. This work looks into this challenge by introducing self-adaptation as a mechanism to handle insider threats in cloud platforms, which is demonstrated in the context of OpenStack authorisation. OpenStack is a popular cloud platform that relies on Keystone, its identity management component, for controlling access to its resources. The use of self-adaptation for handling insider threats is motivated by the fact that self-adaptation has been shown to be quite effective in dealing with uncertainty in a wide range of applications. Malicious insider attacks have become a major cause for concern, since legitimate, though malicious, users might have access, in case of theft, to a large amount of information. The key contribution of this work is the definition of an architectural solution that incorporates self-adaptation into OpenStack in order to deal with insider threats. For that, we have identified and analysed several insider threat scenarios in the context of the OpenStack cloud platform, and have developed a prototype that was used for experimenting and evaluating the impact of these scenarios upon the self-adaptive authorisation system for cloud platforms.

17
  • ADDSON ARAUJO DA COSTA
  • SALSA - A Simple Automatic Lung Segmentation Algorithm

  • Advisor : BRUNO MOTTA DE CARVALHO
  • COMMITTEE MEMBERS :
  • BRUNO MOTTA DE CARVALHO
  • ANNE MAGALY DE PAULA CANUTO
  • RAFAEL BESERRA GOMES
  • WILFREDO BLANCO FIGUEROLA
  • Data: Jul 21, 2016


  • Show Abstract
  • The accurate segmentation of pulmonary tissue is of great importance for several diagnostic tasks. A simple and fast algorithm for performing lung segmentation is proposed here. The method combines several simple image processing operations to achieve the final segmentation, and it can be divided into two problems. The first is lung segmentation, which identifies regions such as the background, trachea, vessels, and the left and right lungs, and is complicated by the presence of noise, artifacts, low contrast, and diseases. The second is lobe segmentation, in which the left lung is divided into two lobes (upper and lower) and the right lung into three lobes (upper, middle, and lower). This second problem is harder because the membranes dividing the lobes, the pleurae, are very thin and are not clearly visible in computerized tomography exams, besides the possible occurrence of lobectomies (surgical lobe removal), diseases that may degrade image quality, or noise during image acquisition. Both methods were developed with the goal of producing an automatic method, and we have already produced good validated results on the first problem, using the testing methodology of the LOLA11 lung segmentation challenge.
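
    To give a flavor of the "simple operations" such a pipeline can combine, here is a toy 2D sketch in Python (the thresholds, the corner-background assumption, and the two-component heuristic are all illustrative choices, not the SALSA algorithm itself):

        import numpy as np
        from scipy import ndimage

        def rough_lung_mask(ct_slice, air_threshold=-400):
            """ct_slice: 2D array of Hounsfield units."""
            air = ct_slice < air_threshold           # lungs and outside air
            labels, _ = ndimage.label(air)
            air &= labels != labels[0, 0]            # drop outside air (corner)
            labels, n = ndimage.label(air)
            sizes = ndimage.sum(air, labels, range(1, n + 1))
            keep = 1 + np.argsort(sizes)[-2:]        # two largest: the lungs
            mask = np.isin(labels, keep)
            return ndimage.binary_fill_holes(mask)   # close vessels inside

        demo = np.full((64, 64), 40)                     # soft tissue
        demo[10:30, 8:28] = demo[10:30, 36:56] = -700    # two air "lungs"
        print(rough_lung_mask(demo).sum(), "lung pixels found")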

18
  • KELYSON NUNES DOS SANTOS
  • A Study of Machine Learning Techniques for Event Prediction through Neurophysiological Data: an Epilepsy Case Study

  • Advisor : AUGUSTO JOSE VENANCIO NETO
  • COMMITTEE MEMBERS :
  • BENJAMIN RENE CALLEJAS BEDREGAL
  • AUGUSTO JOSE VENANCIO NETO
  • FABRICIO LIMA BRASIL
  • RENAN CIPRIANO MOIOLI
  • Data: Jul 28, 2016


  • Show Abstract
  • Event prediction from neurophysiological data involves many variables that must be analyzed at different moments, from data acquisition and recording to post-processing. Hence, choosing the algorithm that will process these data is a very important step, since processing time and accuracy of results are determinant factors for a diagnostic-aid tool. Classification and prediction tasks also help in understanding the interactions of brain cell networks. This work studies data mining techniques with different features, analyzing their impact on the task of event prediction from neurophysiological data, and proposes the use of ensembles to optimize the performance of the event prediction task through computationally low-cost techniques.
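
    A generic majority-vote ensemble sketch in Python follows (the base detectors and the feature window are invented; the thesis' actual classifiers and neurophysiological features are not reproduced), showing how cheap predictors can be combined for binary event prediction:

        from collections import Counter

        def ensemble_predict(window, base_classifiers):
            votes = Counter(clf(window) for clf in base_classifiers)
            return votes.most_common(1)[0][0]

        # Hypothetical low-cost detectors over a feature window.
        mean_rule = lambda w: int(sum(w) / len(w) > 0.5)
        peak_rule = lambda w: int(max(w) > 0.9)
        span_rule = lambda w: int(max(w) - min(w) > 0.6)

        window = [0.2, 0.4, 0.95, 0.3]
        print(ensemble_predict(window, [mean_rule, peak_rule, span_rule]))  # 1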

19
  • CAIO FREITAS DE OLIVEIRA
  • Models and Algorithms for the Resource Production Scheduling Problem on Real-time Strategy Games

  • Advisor : ELIZABETH FERREIRA GOUVEA GOLDBARG
  • COMMITTEE MEMBERS :
  • CAROLINA DE PAULA ALMEIDA
  • ELIZABETH FERREIRA GOUVEA GOLDBARG
  • GIVANALDO ROCHA DE SOUZA
  • MARCO CESAR GOLDBARG
  • SILVIA MARIA DINIZ MONTEIRO MAIA
  • Data: Aug 5, 2016


  • Show Abstract
  • Real-time strategy (RTS) games hold many challenges for the creation of a game AI. One of them is creating an effective plan for a given context. StarCraft is a game used as a platform for game-AI experiments and competitions, and its game AIs have struggled to adapt and to create good plans to counter the opponent's strategy. In this work, a new scheduling model is proposed for planning problems in RTS games. The model considers cyclic events and consists in solving a multi-objective problem that satisfies constraints imposed by the game. Resources, tasks, and cyclic events that translate the game into an instance of the problem are considered. The initial state contains information about resources, uncompleted tasks, and ongoing events. The strategy defines which resources to maximize or minimize and which constraints are applied to the resources, as well as the project horizon. Four multi-objective optimizers are investigated: NSGA-II and its knee variant, GRASP, and Ant Colony Optimization. Experiments with cases based on real StarCraft problems are reported.

20
  • VALDIGLEIS DA SILVA COSTA
  • Fuzzy Linear Languages

  • Advisor : BENJAMIN RENE CALLEJAS BEDREGAL
  • COMMITTEE MEMBERS :
  • ANDERSON PAIVA CRUZ
  • BENJAMIN RENE CALLEJAS BEDREGAL
  • REGIVAN HUGO NUNES SANTIAGO
  • RENATA HAX SANDER REISER
  • Data: Aug 5, 2016


  • Show Abstract
  • Formal languages were introduced in the late 1950s and have since been of great importance in computer science, especially in the lexical and syntactic analysis required during compiler development, and also in grammatical inference techniques. The extended Chomsky hierarchy relates the main classes of formal languages in terms of their expressiveness. It is also possible to establish a relationship between the classes of the Chomsky hierarchy and formalisms such as state machines (or automata) and grammars. Among the language classes in this hierarchy, the class of linear languages has at least four types of "devices" (state machines) characterizing or representing it, among them the non-deterministic linear λ-automata proposed by Bedregal. In the late 1960s, Lee and Zadeh proposed fuzzy languages in an attempt to bridge the gap between formal and natural languages. In turn, Wee and Fu, in order to capture the notion of uncertainty during the recognition of the strings of a language, introduced the concept of fuzzy automata. As in the classical theory, we can trace a relationship between classes of fuzzy languages and fuzzy automata. However, unlike the classical theory, until now there has been no fuzzy automaton model that computes exactly the class of fuzzy linear languages, i.e., that relates directly to the fuzzy linear languages. Therefore, this work proposes a study of a fuzzy automaton model, based on the non-deterministic linear λ-automata, that recognizes the fuzzy linear languages. In addition, since the investigation of closure properties of operators over language classes is an important point in the study of formal languages, this work also investigates which fuzzy operators (union, intersection, etc.) are closed over the classes of fuzzy linear languages.
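
    For intuition, a compact Python sketch of a textbook-style fuzzy automaton follows; it is not the thesis' linear λ-automaton model, and the transition table is invented. Transitions carry membership degrees, a run's degree is the minimum along its transitions, and a string's degree is the maximum over all runs (max-min composition):

        delta = {  # (state, symbol) -> {next_state: degree}
            ("q0", "a"): {"q0": 1.0, "q1": 0.7},
            ("q1", "b"): {"q1": 0.9},
        }
        final = {"q1": 1.0}  # fuzzy set of final states

        def membership(word, start="q0"):
            degrees = {start: 1.0}  # degree of reaching each state
            for symbol in word:
                nxt = {}
                for state, d in degrees.items():
                    for to, w in delta.get((state, symbol), {}).items():
                        nxt[to] = max(nxt.get(to, 0.0), min(d, w))
                degrees = nxt
            return max((min(d, final.get(s, 0.0))
                        for s, d in degrees.items()), default=0.0)

        print(membership("ab"))  # 0.7 via q0 -a(0.7)-> q1 -b(0.9)-> q1
        print(membership("b"))   # 0.0: no accepting run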

21
  • FRANCISCO DIOGO OLIVEIRA DE QUEIROZ
  • Analyzing the Exception Handling Code of Android Apps

  • Advisor : ROBERTA DE SOUZA COELHO
  • COMMITTEE MEMBERS :
  • FERNANDO MARQUES FIGUEIRA FILHO
  • NELIO ALESSANDRO AZEVEDO CACHO
  • ROBERTA DE SOUZA COELHO
  • RODRIGO BONIFACIO DE ALMEIDA
  • Data: Aug 17, 2016


  • Show Abstract
  • Over recent years, we have witnessed an astonishing increase in the number of mobile applications being developed, with some of them becoming widely used. Such applications extend phone capabilities far beyond basic calls. As the number of users increases, so does the number of users affected by application faults and crashes. In this context, Android apps are becoming more and more popular.