Dissertations

1. THIAGO NASCIMENTO DA SILVA
Nelson's logic S and its algebraic semantics
Advisor: UMBERTO RIVIECCIO
COMMITTEE MEMBERS: JOAO MARCOS DE ALMEIDA, UMBERTO RIVIECCIO, HUGO LUIZ MARIANO
Date: Jan 25, 2018
Abstract:
Besides the better-known Nelson logic (N3) and paraconsistent Nelson logic (N4), David Nelson introduced, in the 1959 paper "Negation and separation of concepts in constructive systems", with motivations related to arithmetic and constructibility, a logic that he called "S". In that paper, the logic is defined by means of a calculus (which crucially lacks the contraction rule) having infinitely many rule schemata, and no semantics is provided for it.
We look at the propositional fragment of S and show that it is algebraizable (in fact, implicative) in the sense of Blok and Pigozzi with respect to a class of involutive residuated lattices. We thus provide the first known (algebraic) semantics for S, as well as a Hilbert-style calculus equivalent to Nelson's presentation. We also compare S with the other logics in the Nelson family, N3 and N4.

2. BRENNER HUMBERTO OJEDA RIOS
Hybridization of Metaheuristics with Methods Based on Linear Programming for the Traveling Car Renter Salesman Problem
Advisor: ELIZABETH FERREIRA GOUVEA GOLDBARG
COMMITTEE MEMBERS: ELIZABETH FERREIRA GOUVEA GOLDBARG, MARCO CESAR GOLDBARG, MATHEUS DA SILVA MENEZES, SILVIA MARIA DINIZ MONTEIRO MAIA
Date: Feb 2, 2018
Abstract:
The Traveling Car Renter Salesman Problem, or simply Traveling Car Renter Problem (CaRS), is a generalization of the Traveling Salesman Problem (TSP) in which the tour can be decomposed into contiguous paths traveled by different rented cars. The objective is to construct a minimal-cost Hamiltonian circuit, considering the penalty paid for changing cars along the tour; this penalty is the cost of returning a car to the city where it was rented. CaRS is classified as an NP-hard problem. This work studies the CaRS version classified as complete, total, unrestricted, with no repetition, free and symmetric. The research focuses on hybrid procedures that combine metaheuristics and methods based on Linear Programming (LP). The following methods were investigated: scientific algorithms (ScA), evolutionary algorithms (EA), variable neighborhood descent (VND), adaptive local search (ALSP) and a new variant of ALSP called iterated adaptive local search (IALSP). The following techniques are proposed to deal with CaRS: ScA+ALSP, EA+IALSP, ScA+IALSP and ScA+VND+IALSP. A mixed integer programming model is proposed for CaRS and used in the ALSP and IALSP. Non-parametric tests were used to compare the algorithms on a set of instances from the literature.
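The abstract names the metaheuristic components but not their mechanics. As a rough, hypothetical illustration of the pattern that iterated adaptive local search builds on, a generic iterated local search can be sketched as follows (all function names and the toy objective are invented for illustration, not taken from the dissertation):

```python
import random

def iterated_local_search(initial, local_search, perturb, cost, iterations=50, seed=0):
    """Generic iterated-local-search skeleton: repeatedly perturb the
    incumbent and re-apply local search, keeping improvements."""
    rng = random.Random(seed)
    best = local_search(initial)
    for _ in range(iterations):
        candidate = local_search(perturb(best, rng))
        if cost(candidate) < cost(best):
            best = candidate
    return best

# Toy demonstration: minimize the number of out-of-order adjacent pairs
# in a permutation (a stand-in for a tour-cost objective).
def cost(p):
    return sum(1 for a, b in zip(p, p[1:]) if a > b)

def local_search(p):
    p = list(p)
    improved = True
    while improved:                      # first-improvement adjacent swaps
        improved = False
        for i in range(len(p) - 1):
            if p[i] > p[i + 1]:
                p[i], p[i + 1] = p[i + 1], p[i]
                improved = True
    return p

def perturb(p, rng):
    p = list(p)
    i, j = rng.sample(range(len(p)), 2)  # random swap ("kick")
    p[i], p[j] = p[j], p[i]
    return p

result = iterated_local_search([4, 2, 5, 1, 3], local_search, perturb, cost)
```

The hybrid algorithms in the dissertation replace these placeholder components with CaRS-specific neighborhoods and LP-based steps.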

3. DIEGO DE AZEVEDO OLIVEIRA
BTestBox: a testing tool for B implementations
Advisor: DAVID BORIS PAUL DEHARBE
COMMITTEE MEMBERS: MARCEL VINICIUS MEDEIROS OLIVEIRA, DAVID BORIS PAUL DEHARBE, VALÉRIO GUTEMBERG DE MEDEIROS JUNIOR
Date: Feb 5, 2018
Abstract:
Software needs to be safe and correct. From that assumption, new technologies and techniques have been developed to verify the correctness of programs. This safety requirement is even more relevant for critical systems, such as railway and avionics systems. The use of formal methods in software construction addresses this problem. When using the B-Method with Atelier-B, after the components of a project are proved, they must be translated to the desired target language. This translation is performed by B translators and compilers. The compilation process is usually safe when done by mature compilers, although even these are not free of errors and bugs are eventually found. Extending this claim to B translators demands caution, since they are not as widely used as compilers that have been on the market longer. Software testing can be used to analyze the translated code. Through coverage criteria it is possible to infer the level of quality of a piece of software and to detect bugs. Checking coverage and testing the software is hard and time-consuming, especially if done manually. To address this demand, the BTestBox tool aims to automatically analyze the coverage reached by B implementations built with Atelier-B. BTestBox also automatically tests the translation of B implementations: it uses the same test cases generated for the coverage check and compares the expected output values with the values produced by the translation. This process is fully automatic and may be used from the Atelier-B interface through a plugin with a simple interface. This thesis proposal presents BTestBox, the implementation of the ideas described above. BTestBox was tested with small B implementations covering all elements of the B language, and it offers various functionalities and advantages to developers who use the B-Method.
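The translation check described above amounts to differential testing: run the same test cases against the proved model and against its translation and compare outputs. A minimal sketch of that idea (hypothetical functions, not BTestBox's actual API):

```python
def check_translation(test_cases, reference, translated):
    """Compare the outputs of a reference model and of its translation
    on the same test inputs; any mismatch signals a translation bug."""
    failures = []
    for inputs in test_cases:
        expected = reference(*inputs)
        actual = translated(*inputs)
        if expected != actual:
            failures.append((inputs, expected, actual))
    return failures

# Hypothetical example: a proved operation computing max, and a
# deliberately faulty "translation" of it.
def b_max(a, b):
    return a if a >= b else b

def translated_max(a, b):
    # hypothetical translation bug: comparison branches swapped (computes min)
    return b if a > b else a

failures = check_translation([(1, 2), (5, 3), (4, 4)], b_max, translated_max)
```

Here the harness flags the two inputs on which the faulty translation diverges from the reference.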

4. RENAN DE OLIVEIRA SILVA
A Proposal for a Process for Deploying Open Data in Brazilian Public Institutions
Advisor: GIBEON SOARES DE AQUINO JUNIOR
COMMITTEE MEMBERS: FERNANDO MARQUES FIGUEIRA FILHO, GIBEON SOARES DE AQUINO JUNIOR, VANILSON ANDRÉ DE ARRUDA BURÉGIO
Date: Feb 20, 2018
Abstract:
The Open Data initiative has been gaining strength in recent times, with increasing participation of public institutions. However, there are still many challenges to overcome when deciding to open data, which negatively affects the quality and effectiveness of publications. Therefore, the objective of this work is to establish a process that helps Brazilian public institutions open their data, systematizing the necessary tasks and phases. To this end, we carried out a systematic mapping of the literature in order to identify strategies, best practices, challenges and difficulties that exist in the field.

5. FRED DE CASTRO SANTOS
A mechanism to evaluate context-free queries inspired in LR(1) parsers over graph databases
Advisor: UMBERTO SOUZA DA COSTA
COMMITTEE MEMBERS: MARCEL VINICIUS MEDEIROS OLIVEIRA, MARIZA ANDRADE DA SILVA BIGONHA, MARTIN ALEJANDRO MUSICANTE, SERGIO QUEIROZ DE MEDEIROS, UMBERTO SOUZA DA COSTA
Date: Feb 23, 2018
Abstract:
The World Wide Web is an ever-increasing collection of information. This information is spread among different documents, which are made available using the Hypertext Transfer Protocol (HTTP). Even though this information is accessible to users in the form of news articles, audio broadcasts, images and videos, software agents often cannot classify it, and the lack of semantic information about these documents in a machine-readable format often makes analysis inaccurate. A significant number of entities have adopted Linked Data as a way to add semantic information to their data, rather than just publishing it on the Web. The result is a global data collection, called the Web of Data, which forms a global graph consisting of Resource Description Framework (RDF) statements from numerous sources, covering all sorts of topics. To find specific information in this graph, queries are performed by starting at a subject and analyzing its predicates in the RDF statements. Given that a trace is a list of predicates along an information path, there is a connection between a subject and an object if there is a trace between them in the RDF statements.
The use of HTTP as a standardized data access mechanism and RDF as a standard data model simplifies data access, but accessing heterogeneous data in distinct locations can have increased time complexity, and current query languages have reduced expressiveness, which motivates us to research alternative ways of querying this data. This reduced expressiveness arises because most query languages reside in the class of Regular Languages. In this work, we introduce some of the concepts needed to better understand the problems at hand and how to solve them. We analyze works related to our research and propose the use of Deterministic Context-Free Grammars instead of Regular Languages to increase the expressiveness of graph database queries; more specifically, we apply the LR(1) parsing method to find paths in an RDF graph database. Lastly, we analyze our algorithm's complexity and perform experiments comparing our solution with other proposals, showing that ours can perform better in certain scenarios.
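The notion of a trace, a list of predicates connecting a subject to an object, can be illustrated with a small breadth-first search over a toy set of RDF triples (the data and function names are illustrative assumptions, not from the dissertation):

```python
from collections import deque

def find_trace(triples, subject, target):
    """Breadth-first search over RDF triples: returns the list of
    predicates (a 'trace') linking subject to target, or None."""
    frontier = deque([(subject, [])])
    visited = {subject}
    while frontier:
        node, trace = frontier.popleft()
        if node == target:
            return trace
        for s, p, o in triples:
            if s == node and o not in visited:
                visited.add(o)
                frontier.append((o, trace + [p]))
    return None

# Hypothetical mini-graph of RDF statements.
triples = [
    ("alice", "knows", "bob"),
    ("bob", "worksAt", "acme"),
    ("acme", "locatedIn", "berlin"),
]
trace = find_trace(triples, "alice", "berlin")
```

A regular path query constrains which predicate sequences count as valid traces; the dissertation's point is that context-free constraints admit strictly more such patterns.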

6. CIRO MORAIS MEDEIROS
Top-Down Evaluation of Context-Free Path Queries in Graph Databases
Advisor: MARTIN ALEJANDRO MUSICANTE
COMMITTEE MEMBERS: MARTIN ALEJANDRO MUSICANTE, MARCEL VINICIUS MEDEIROS OLIVEIRA, UMBERTO SOUZA DA COSTA, SERGIO QUEIROZ DE MEDEIROS, MARIZA ANDRADE DA SILVA BIGONHA
Date: Feb 23, 2018
Abstract:
The internet has enabled the creation of an immense global data space that can be accessed in the form of web pages. However, web pages are ideal for presenting content to human beings, not for being interpreted by machines, and it is difficult to relate the information stored in the databases behind these pages. From this arose Linked Data, a set of good practices for relating and publishing data.
The standard format recommended by Linked Data for storing and publishing related data is RDF. This format uses triples in the form (subject, predicate, object) to establish relationships between the data. A triplestore can easily be visualized as a graph, so queries are made by defining paths in the graph. SPARQL, the standard query language for RDF graphs, supports the definition of paths using regular expressions. However, regular expressions have reduced expressiveness, insufficient for some desirable queries. To overcome this problem, some studies have proposed the use of context-free grammars to define the paths.
We present an algorithm for evaluating context-free path queries in graphs, inspired by top-down parsing techniques. Given a graph and a query defined over a context-free grammar, our algorithm identifies pairs of vertices linked by paths that form words of the language generated by the grammar. We show that our algorithm is correct and demonstrate other important properties, including cubic worst-case runtime complexity in the number of vertices in the graph. We implemented the proposed algorithm and evaluated its performance on RDF databases and synthetic graphs to confirm its efficiency.
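The dissertation's algorithm is top-down and is not reproduced here; as a simpler illustration of what a context-free path query computes, the following bottom-up fixpoint finds all vertex pairs linked by a path spelling a word of the grammar (grammar, graph and names are hypothetical):

```python
def cfpq(edges, terminal_rules, binary_rules):
    """Bottom-up fixpoint: computes all (N, u, v) such that the labels
    along some path u -> v spell a word derivable from nonterminal N.
    Rules are in a Chomsky-like normal form: N -> terminal, N -> B C."""
    result = {(n, u, v) for (u, lbl, v) in edges
              for (n, t) in terminal_rules if t == lbl}
    changed = True
    while changed:
        changed = False
        for (n, b, c) in binary_rules:
            for (n1, u, mid) in list(result):
                if n1 != b:
                    continue
                for (n2, mid2, v) in list(result):
                    if n2 == c and mid2 == mid and (n, u, v) not in result:
                        result.add((n, u, v))   # concatenate B-path and C-path
                        changed = True
    return result

# Hypothetical query for the language a^n b^n (S -> a S b | a b),
# over a 4-vertex cycle labeled a, a, b, b.
edges = [(0, "a", 1), (1, "a", 2), (2, "b", 3), (3, "b", 0)]
terminal_rules = [("A", "a"), ("B", "b")]
binary_rules = [("S", "A", "B"), ("S", "A", "S1"), ("S1", "S", "B")]
pairs = sorted((u, v) for (n, u, v) in cfpq(edges, terminal_rules, binary_rules)
               if n == "S")
```

On this graph the query matches the "ab" path from vertex 1 to 3 and the "aabb" cycle from vertex 0 back to itself, exactly the pairs a regular path expression could not characterize for unbounded n.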

7. ANDERSON PABLO NASCIMENTO DA SILVA
A Monitoring Platform for Heart Arrhythmia in Real-time Flows
Advisor: GIBEON SOARES DE AQUINO JUNIOR
COMMITTEE MEMBERS: FERNANDO ANTONIO MOTA TRINTA, GIBEON SOARES DE AQUINO JUNIOR, JOAO CARLOS XAVIER JUNIOR, THAIS VASCONCELOS BATISTA
Date: Feb 27, 2018
Abstract:
In the last decade, there has been rapid growth in the ability of computer systems to collect and carry large amounts of data. Scientists and engineers who collect this data have often turned to machine learning to find solutions to the problem of turning that data into information. For example, various medical devices, such as health monitoring systems and drug boxes with embedded sensors, allow raw data to be collected, stored and analyzed, and through this analysis one can derive insights and decisions from such data sets. With the use of health applications based on machine learning, there is an opportunity to improve the quality and efficiency of medical care and, consequently, the wellness of patients. Thus, the general objective of this work is the construction of an intelligent cardiac arrhythmia monitoring platform that allows monitoring, identifying and alerting health professionals, patients and relatives in real time about the hospitalized patient's health. The architecture and implementation of the platform were based on the Weka API and, as part of this work, a proof of concept of the use of the platform, involving modules and applications developed in Java, was implemented.

8. ALTAIR BRANDÃO MENDES
Mandala - SoS-based interoperability in smart cities
Advisor: THAIS VASCONCELOS BATISTA
COMMITTEE MEMBERS: ELISA YUMI NAKAGAWA, FREDERICO ARAUJO DA SILVA LOPES, GIBEON SOARES DE AQUINO JUNIOR, THAIS VASCONCELOS BATISTA
Date: Feb 28, 2018
Abstract:
Currently, cities depend considerably on information systems. A large part of these systems, regardless of whether they are under public or private management, was developed using technologies and concepts that are now considered outdated. Moreover, because they were not designed to communicate with other systems in an interoperable way, many of these systems are isolated and non-standardized solutions. In contrast, the dynamism demanded by companies, government and, above all, the population presupposes the union of these systems, working in an integrated and interoperable way. This interoperability is critical to achieving the expected efficiency and effectiveness of resource use in a smart city. Furthermore, the union of these systems can bring previously unimaginable results compared to those achieved by each system in isolation. These characteristics refer to the concept of a System of Systems (SoS): a set of complex, independent, heterogeneous systems that have their own purposes and collaborate with each other to achieve common goals. The interaction between different systems made possible by an SoS is more than the sum of the systems involved, since it allows the SoS to offer new functionalities that are not provided by any of the systems operating alone. Based on these characteristics, this work proposes Mandala, an SoS-centric middleware that enables interoperability between information systems in smart cities. The goal is to make the heterogeneity of the systems involved transparent, providing an environment for the integration and interoperation of information systems.

9. FÁBIO PHILLIP ROCHA MARQUES
From the Alphabets to the Proficiency Exam: A Systematic Review of Applications for Teaching and Reviewing the Japanese Language
Advisor: LEONARDO CUNHA DE MIRANDA
COMMITTEE MEMBERS: ANDRE MAURICIO CUNHA CAMPOS, LEONARDO CUNHA DE MIRANDA, MARCIA JACYNTHA NUNES RODRIGUES LUCENA, ROMMEL WLADIMIR DE LIMA
Date: May 28, 2018
Abstract:
Japanese is a language whose writing, vocabulary, grammar and pronunciation are quite different from those of Western languages: it has three alphabets (two syllabic and one logographic), its vocabulary, orthography and phonology were built upon different nations, and its grammar has many rules and forms, which may even differ according to the degree of formality between listener and speaker. Therefore, studying Japanese requires a lot of dedication and practice. To support the study of the language, more than 3100 applications are available in virtual stores intended to help students learn and review the Japanese alphabets, vocabulary, grammar and listening comprehension, as well as prepare for the Japanese Language Proficiency Test (JLPT). However, little has been investigated about the contents, teaching and reviewing methodology, and technological features of these applications. This research aims to systematically review applications focused on supporting Japanese language study, based on a proposed framework for qualitative and quantitative review of language learning software. An individual evaluation is carried out for each part of the language, starting with the alphabets and proceeding with vocabulary, grammar and listening comprehension, in order to study the applications for each component of the Japanese language, and finishing with an analysis of applications geared towards JLPT preparation, since there are applications with content and presentation adjusted specifically for the exam.
Research findings include details of the main features of applications in the current scenario, a classification and comparison of the most recommended applications for the Android and iOS mobile platforms, a comparison between Android and iOS apps in relation to the support they provide, and a study of features that rarely appear in current applications but are nonetheless very important for helping study Japanese.

10. ISLAME FELIPE DA COSTA FERNANDES
Hybrid Metaheuristics Applied to the Multi-objective Spanning Tree Problem
Advisor: ELIZABETH FERREIRA GOUVEA GOLDBARG
COMMITTEE MEMBERS: ELIZABETH FERREIRA GOUVEA GOLDBARG, MARCO CESAR GOLDBARG, SILVIA MARIA DINIZ MONTEIRO MAIA, THATIANA CUNHA NAVARRO DE SOUZA
Date: Jul 6, 2018
Abstract:
The Multi-objective Spanning Tree Problem (MSTP) is an NP-hard extension of the Minimum Spanning Tree Problem (MST). Since the MSTP models several real-world problems in which conflicting objectives need to be optimized simultaneously, it has been extensively studied in the literature, and several exact and heuristic algorithms have been proposed for it. Moreover, over the last years, research has shown the considerable performance of algorithms that combine various metaheuristic strategies. These are called hybrid algorithms, and previous works have successfully applied them to several optimization problems. In this work, five new hybrid algorithms are proposed for two versions of the MSTP: three for the bi-objective version (BiST), based on Pareto dominance, and two for the many-objective version, based on the ordered weighted average operator (OWA-ST). This research hybridized elements from various metaheuristics. Computational experiments investigated the potential of the new algorithms concerning computational time and solution quality, and the results were compared to the state of the art.
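The ordered weighted average operator behind OWA-ST is not defined in the abstract; for reference, standard OWA aggregation applies the weights to the sorted objective values rather than to fixed objectives:

```python
def owa(values, weights):
    """Ordered weighted average: weights are applied to the values
    sorted in decreasing order, not to particular objectives."""
    assert len(values) == len(weights)
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

# Weight placement controls the aggregation's attitude: all weight on
# the first (largest) position behaves like max, uniform weights give
# the plain average.
worst_case = owa([3.0, 7.0, 5.0], [1.0, 0.0, 0.0])   # -> 7.0
average = owa([3.0, 7.0, 5.0], [1/3, 1/3, 1/3])      # ≈ 5.0
```

In a many-objective spanning tree setting, such an operator collapses an objective vector into a single scalar, so candidate trees can be compared directly.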

11. JÉSSICA LAÍSA DIAS DA SILVA
Game Design of Computational Thinking Games inspired by the Bebras Challenge Evaluation Instrument
Advisor: EDUARDO HENRIQUE DA SILVA ARANHA
COMMITTEE MEMBERS: EDUARDO HENRIQUE DA SILVA ARANHA, JACQUES DUÍLIO BRANCHER, MARCIA JACYNTHA NUNES RODRIGUES LUCENA
Date: Jul 25, 2018
Abstract:
Several skills are required in this century, among them computer-related skills. As Blikstein (2008) states, the list of skills required for this century is quite extensive; however, he emphasizes computational thinking as one of the most significant as well as least understood. Computational Thinking (CT) can be defined as a problem-solving process that encompasses concepts, skills and practices of Computer Science. Among the international efforts to disseminate Computational Thinking, the Bebras Challenge stands out. Its main goal is to motivate primary and secondary school students, as well as the general public, to become interested in computing and CT. Widespread teaching of Computational Thinking is important, but digital games still largely fail to address the skills proposed for the teaching and learning of CT. Thus, the present work aims to investigate the quality of the game design of educational games created from questions of the Bebras Challenge.

12. WENDELL OLIVEIRA DE ARAÚJO
Procedural Content Generation for Creating Levels of Educational Games
Advisor: EDUARDO HENRIQUE DA SILVA ARANHA
COMMITTEE MEMBERS: EDUARDO HENRIQUE DA SILVA ARANHA, JACQUES DUÍLIO BRANCHER, MARCIA JACYNTHA NUNES RODRIGUES LUCENA
Date: Jul 25, 2018
Abstract:
Educational digital games have been a growing area of research over the years on the international scene, owing to the potential games have for fun, immersion and stimulating learning in a natural and personalized way. One of the great challenges in this area is the creation of games that address the contents proposed to be taught or practiced. In this sense, procedural content generation has emerged as an area that can assist the development of educational games. Procedural content generation (PCG) deals with the automatic creation of content such as textures, sounds, objects and, in the context of this work, levels. Thus, PCG contributes to the creation of new levels without the need for human intervention. This research seeks to use PCG techniques to create levels of educational games that require the player to achieve certain pedagogical goals throughout the game. For this, we propose a generation approach in three stages: (i) generation of the basic structure of the level (e.g., only floors and walls); (ii) generation of the elements related to the pedagogical objectives of the level; (iii) completion of the remainder of the level with enemies and other scenario elements. In this way, different challenges and scenarios can be created so that a student can practice certain content, since whenever a challenge is completed, a new one can be generated for the student. This approach will be investigated using the grammar-based PCG technique. We seek to verify whether the technique, in conjunction with the proposed approach, assists in generating content effectively, evaluating its quality and functionality with elementary school students.
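The three-stage generation itself is not detailed in the abstract; a minimal sketch of the grammar-based PCG idea it relies on, with a hypothetical level grammar that injects one pedagogical "goal" tile per level, might look like:

```python
import random

# Hypothetical level grammar: a level is a sequence of segments, and
# each segment rewrites into concrete tiles. The pedagogical element
# (stage ii of the approach) is a mandatory GOAL symbol.
RULES = {
    "LEVEL":   [["SEGMENT", "GOAL", "SEGMENT"]],
    "SEGMENT": [["floor", "floor"], ["floor", "enemy", "floor"]],
    "GOAL":    [["question"]],          # a pedagogical challenge tile
}

def expand(symbol, rng):
    """Recursively rewrite a symbol using randomly chosen productions;
    symbols without rules are terminal tiles."""
    if symbol not in RULES:
        return [symbol]
    production = rng.choice(RULES[symbol])
    return [tile for part in production for tile in expand(part, rng)]

level = expand("LEVEL", random.Random(42))
```

Because the grammar guarantees exactly one GOAL per LEVEL, every generated variation still contains the pedagogical challenge, which is the property the dissertation's approach depends on.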

13. RAFAEL FERREIRA TOLEDO
Recovery Mechanism Based on a Rewriting Process for Web Service Compositions
Advisor: UMBERTO SOUZA DA COSTA
COMMITTEE MEMBERS: GENOVEVA VARGAS-SOLAR, MARTIN ALEJANDRO MUSICANTE, UMBERTO SOUZA DA COSTA
Date: Jul 26, 2018
Abstract:
Web service compositions are exposed to a wide variety of failures. Remotely located service components can present problems due to the connectivity required for communication or to changes implemented by their providers during system updates. Such problems are unexpected events that compromise the correctness and availability of a given service composition. This dissertation presents an approach to improve the robustness of Web service compositions by recovering from failures that occur at different moments of their execution. We first present a taxonomy of failures as an overview of previous research on fault recovery of service compositions. The resulting classification is used to propose our self-healing method for Web service orchestrations. The proposed method, based on a composition refinement process, takes user preferences into account to generate the best possible recovering compositions. To validate our approach, we produced a prototype implementation capable of simulating and analyzing different fault scenarios. To that end, our work introduces algorithms for generating synthetic compositions and Web services. In this setting, both recovery time and user preference degradation are investigated under different strategies, namely local, partial and total recovery, which represent different levels of intervention on the composition.

14. GABRIEL DE ALMEIDA ARAÚJO
Interactive Platform of Velocity Analysis on Seismic Data
Advisor: BRUNO MOTTA DE CARVALHO
COMMITTEE MEMBERS: BRUNO MOTTA DE CARVALHO, MONICA MAGALHAES PEREIRA, CARLOS CESAR NASCIMENTO DA SILVA, ARMANDO LOPES FARIAS
Date: Jul 27, 2018
Abstract:
With the advancement of hydrocarbon exploration, the oil industry has been searching for ways to minimize exploratory risks, one of which is the improvement of the tools used. There are three steps in this exploration: seismic data acquisition, seismic processing and seismic interpretation. This work belongs to seismic processing, more specifically to one of its stages, seismic velocity analysis, which aims to find the seismic velocity field that yields reliable earth subsurface models through known velocity analysis algorithms. One objective of this work is the creation of tools to facilitate velocity analysis by implementing these algorithms so that they work integrated in a single analysis platform. The advance of exploration has also brought a considerable increase in the volume of seismic data acquired, which has led to an increasing need for computer processing power. Given this need, we present a methodology for velocity analysis using GPUs and its results, showing the viability of using them to accelerate geophysics algorithms, particularly velocity analysis algorithms. Finally, case studies are presented, showing the performance results of the CPU and GPU versions of the algorithms.

15. FÁBIO ANDREWS ROCHA MARQUES
Development and Evaluation of Nihongo Kotoba Shiken: A Computerized Exam for the Japanese Language
Advisor: LEONARDO CUNHA DE MIRANDA
COMMITTEE MEMBERS: ANDRE MAURICIO CUNHA CAMPOS, LEONARDO CUNHA DE MIRANDA, MARCIA JACYNTHA NUNES RODRIGUES LUCENA, ROMMEL WLADIMIR DE LIMA
Date: Jul 27, 2018
Abstract:
The study of foreign languages involves the constant elaboration, application and correction of exams. In this context, the use of computerized tests has facilitated these tasks, but some limitations remain. Building on studies in the areas of foreign language knowledge assessment and its automation, the present research aims to develop a method to automate knowledge assessment in the Japanese language that does not require full interaction with a professional teacher of the language and that is not limited to a fixed content of the language, i.e. the content of the test must be modifiable. This work presents the research stages concerning the study and evaluation of Japanese language knowledge through technology, the design of the evaluation methodology used in the exam, the execution flow and characteristics of Nihongo Kotoba Shiken, and assessments with a professional of the language and some Japanese language learning classes.

16. GABRIELA OLIVEIRA DA TRINDADE
Visualization of Traceability in Agile Projects through Data contained in Tools of Support to the Management of Projects
Advisor: MARCIA JACYNTHA NUNES RODRIGUES LUCENA
COMMITTEE MEMBERS: GILBERTO AMADO DE AZEVEDO CYSNEIROS FILHO, LYRENE FERNANDES DA SILVA, MARCIA JACYNTHA NUNES RODRIGUES LUCENA
Date: Jul 27, 2018
Abstract:
Software traceability, understood as the ability to relate any software engineering artifacts, brings great advantages to the development process. The information it provides helps in decision making in the face of a change, in better understanding of artifacts, and in reusability, maintenance, and forecasting of costs and deadlines, among others. With environments increasingly adopting agile methodologies, with the client side-by-side giving constant feedback, accommodating requested changes has become common practice during system development. For changes to be made safely, traceability information supports decisions so that a change does not introduce inconsistencies or errors and does not generate system failures. Some project management tools support traceability elements. However, given the amount of data such a practice can provide, it is difficult to interpret it, especially when it is presented only textually. Since information visualization makes it possible to analyze large volumes of data quickly and clearly, supporting safer decision making and revealing previously unseen information, techniques for visualizing traceability information can be found in the literature. Such techniques, however, require more than these data: they must consider the pillars of information discussed in academia (the problem, what, when and who to view) to produce an adequate visualization. To this end, this work conducts interviews in industry to answer the pillars of information considered in the proposal of a visualization, followed by an analysis of the collected data based on Grounded Theory. Then, given the assembled traceability context, the defined profiles, needs and problems, and the artifacts generated in agile environments, the information visualizations existing in the bibliography are studied.
As a result, a discussion and a suggestion of appropriate visualizations for traceability information are made, based on the suggestions in the literature and the data collected in the interviews. Later, using the heuristics created, the project management tools that integrate with the GitHub hosting and versioning platform are evaluated to see whether they provide the identified visualization of traceability information.

17. FRANCISCO GENIVAN SILVA
Analysis of Student Behavior in Video Lessons
Advisor: EDUARDO HENRIQUE DA SILVA ARANHA
COMMITTEE MEMBERS: EDUARDO HENRIQUE DA SILVA ARANHA, FERNANDO MARQUES FIGUEIRA FILHO, ISABEL DILLMANN NUNES, FABIANO AZEVEDO DORÇA
Date: Jul 27, 2018
Abstract:
Distance education and the use of e-learning systems generate large amounts of educational data. The use of databases and the storage of execution logs make this data more easily accessible and suitable for investigating educational processes. Methodologies for automatically extracting useful information from large volumes of data, especially data mining, have contributed significantly to improvements in education. However, most traditional methods focus solely on the data or on how they are structured, with no major concern for the educational process as a whole. In addition, little attention has been paid to data on student behavior during the use of educational resources and media. Video lessons make up a significant part of several courses offered, demonstrating that video culture is increasingly widespread and part of students' daily lives. We therefore understand that analyzing students' behavior while they watch videos can contribute to a more accurate evaluation of the quality of the subjects addressed and the way they were presented. Thus, this master's work consisted of studies conducted to investigate how students behave during the use of video lessons, in order to propose an approach to evaluate this resource. The evaluation of video lessons occurs through a process that involves extracting information from log files and modeling actions through process mining. The initial results demonstrate that the number of views, the time spent and the moment at which students drop out of a video are variables with great capacity to offer useful information about students' learning.
This shows that evaluating an educational resource through the analysis of students' actions can contribute substantially to the educational area, benefiting the treatment of issues such as the identification of bottlenecks in the learning process and the anticipation of problems, especially in distance education. The results obtained in the first studies, using process mining on experimental data, provided greater clarity about students' behavior during video lessons, giving the necessary direction for actions to be taken by teachers or content producers. In view of this, the work contributes to the improvement of key aspects of video lessons from a multidisciplinary approach, directly helping educators and managers to promote a more complete education based on resources of better quality.
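The three indicators highlighted above (number of views, time spent, drop-out point) can be computed from raw play logs before any process mining is applied. A hypothetical sketch, with the log format and all names assumed rather than taken from the dissertation:

```python
def video_stats(sessions, video_length):
    """Aggregate per-viewing (start, stop) log pairs into the three
    indicators: views, average time watched, and drop-out points."""
    views = 0
    total_watched = 0.0
    dropouts = []
    for start, stop in sessions:
        views += 1
        total_watched += stop - start
        if stop < video_length:          # student left before the end
            dropouts.append(stop)
    avg_watch = total_watched / views if views else 0.0
    return {"views": views, "avg_watch": avg_watch, "dropouts": dropouts}

# Hypothetical log of three viewing sessions of a 300-second video.
stats = video_stats([(0, 300), (0, 120), (10, 150)], video_length=300)
```

Clustered drop-out times (here around the two-minute mark) are the kind of signal that would point a content producer at a problematic passage of the video.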

18. DANNYLO JOHNATHAN BERNARDINO EGÍDIO
A framework proposal to facilitate the development of IoT-based applications
Advisor: GIBEON SOARES DE AQUINO JUNIOR
COMMITTEE MEMBERS: GIBEON SOARES DE AQUINO JUNIOR, EDUARDO HENRIQUE DA SILVA ARANHA, DIEGO RODRIGO CABRAL SILVA, KIEV SANTOS DA GAMA
Date: Jul 30, 2018
Abstract:
Recent years have been marked by growing advances in embedded computing, sensing technologies and connected devices. These advances have had a significant impact on innovative paradigms such as the Internet of Things (IoT), which envisions intelligent objects, capable of connecting to the network, cooperating with each other to achieve a common goal. This growth has driven vendor initiatives to produce protocols and communication standards that enable such cooperation. However, the considerable diversity of devices, and consequently of protocols, has made this process difficult, creating numerous challenges, including heterogeneity and interoperability. These challenges have made the IoT application development process a complex and costly task, since the capabilities of these protocols and standards for discovering the devices on the network and for communicating among them have become quite specific to each device, forcing the developer to create complex integration strategies to deal with this limitation. This work therefore proposes a framework that seeks to simplify the development of IoT applications through device virtualization: heterogeneous aspects of the devices are abstracted by this virtualization, and common protocol operations, such as device discovery and communication, are abstracted through a common interface that integrates the protocols and reduces the impact of their heterogeneous characteristics.
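The virtualization idea can be sketched as a common interface behind which protocol-specific adapters hide their details. The class and method names below are illustrative assumptions, not the framework's actual API:

```python
from abc import ABC, abstractmethod

class VirtualDevice(ABC):
    """Common interface abstracting protocol-specific devices (hypothetical)."""
    @abstractmethod
    def discover(self):
        """Return the resources this device exposes."""
    @abstractmethod
    def read(self, resource):
        """Read the current value of a resource."""

class MqttDevice(VirtualDevice):
    # A real adapter would wrap an MQTT client; canned values
    # are returned here only to illustrate the uniform interface.
    def discover(self):
        return ['temperature']
    def read(self, resource):
        return 21.5

class CoapDevice(VirtualDevice):
    def discover(self):
        return ['humidity']
    def read(self, resource):
        return 0.62

def read_all(devices):
    """Application code sees one interface, regardless of protocol."""
    return {r: d.read(r) for d in devices for r in d.discover()}
```

The application-level call `read_all([MqttDevice(), CoapDevice()])` collects values from both devices without any protocol-specific code, which is the integration benefit the abstract describes.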
|
|
19
|
-
ERITON DE BARROS FARIAS
-
Recommendations Catalog to Support Agile Adoption or Transformation
-
Advisor : MARCIA JACYNTHA NUNES RODRIGUES LUCENA
-
COMMITTEE MEMBERS :
-
FERNANDO MARQUES FIGUEIRA FILHO
-
MARCIA JACYNTHA NUNES RODRIGUES LUCENA
-
MARILIA ARANHA FREIRE
-
Data: Jul 30, 2018
-
-
Show Abstract
-
The number of studies on agile methods has increased in the academy. Agile software development has a significant positive impact on the performance of development teams, software quality and user satisfaction. Thus, among other topics, Agile Adoption and Transformation are two of the most relevant themes at the main agile events. Many teams that work with agile development report missing a tutorial or document in which they could find solutions to help agile teams carry out Agile Transformation or Adoption processes more easily. Therefore, this work analyzes and categorizes information that can assist teams in these processes. The result of this analysis was organized in a catalog called Recommendations Catalog to Assist Agile Adoption or Transformation.
|
|
20
|
-
VINÍCIUS ARAÚJO PETCH
-
Profitable Tour Problem with Passengers and Time Constraints (PTP-TR)
-
Advisor : MARCO CESAR GOLDBARG
-
COMMITTEE MEMBERS :
-
ELIZABETH FERREIRA GOUVEA GOLDBARG
-
MARCO CESAR GOLDBARG
-
MATHEUS DA SILVA MENEZES
-
SILVIA MARIA DINIZ MONTEIRO MAIA
-
Data: Aug 6, 2018
-
-
Show Abstract
-
This work models and examines solutions to the Profitable Tour Problem with Passengers and Time Constraints (PTP-TR). It proposes a mathematical model for the problem, an exact solution algorithm, and metaheuristics to approximate solutions. Since the model is not described in the literature, test instances were also created to support the computational experiments required by this research. A computational experiment evaluates the performance of the mathematical model and delineates the approximation ability of the metaheuristic algorithms for the problem. Finally, the work describes the schedule for the master's defense and how the problem can be developed in future work.
|
|
21
|
-
LUCAS MARIANO GALDINO DE ALMEIDA
-
Mining Exceptional Interfaces based on GitHub: An Exploratory Study
-
Advisor : ROBERTA DE SOUZA COELHO
-
COMMITTEE MEMBERS :
-
ROBERTA DE SOUZA COELHO
-
UIRA KULESZA
-
EIJI ADACHI MEDEIROS BARBOSA
-
MARCELO DE ALMEIDA MAIA
-
Data: Aug 14, 2018
-
-
Show Abstract
-
Uncaught exceptions are not an exceptional scenario in current applications; they are estimated to account for two thirds of system crashes. Such exceptions can be thrown by the application itself, by the underlying system or hardware, or even by a reused API. More often than not, the documentation of the runtime exceptions signaled by API methods is absent or incomplete. As a consequence, the developer usually discovers such exceptions only when they happen in the production environment, leading to application crashes. This work reports an exploratory study that mined the exception stack traces embedded in GitHub issues to discover the undocumented exception interfaces of API methods. Overall, the issues of 2,970 Java projects hosted on GitHub were mined and 66,118 stack traces were extracted. A set of top Maven APIs was then investigated using this stack trace data set, and undocumented exception interfaces could be discovered. The results of the mining study show that the information embedded in issues can indeed be used to discover undocumented exceptions thrown by API methods.
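A minimal sketch of the extraction step: a regular expression that pulls fully-qualified exception types out of Java stack traces pasted into issue bodies. The pattern is an assumption for illustration; the study's actual parser is necessarily more elaborate:

```python
import re

# Matches a fully-qualified Java exception type, e.g.
# "java.lang.IllegalStateException": one or more lowercase package
# segments followed by a class name ending in Exception or Error.
EXC = re.compile(r'\b((?:[a-z]\w*\.)+[A-Z]\w*(?:Exception|Error))\b')

def mine_exceptions(issue_text):
    """Return the set of exception types found in an issue body."""
    return set(EXC.findall(issue_text))
```

Applied to an issue containing `java.lang.IllegalStateException` at the head of a trace and `java.io.IOException` in a "Caused by" line, the function returns both types while ignoring non-exception identifiers such as file names in the frames.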
|
|
22
|
-
JOÃO CARLOS EPIFANIO DA SILVA
-
Investigation of Engineering Requirements Education from the Academy and Industry Perspective: Focus on Context Interpretation and Requirements Writing
-
Advisor : MARCIA JACYNTHA NUNES RODRIGUES LUCENA
-
COMMITTEE MEMBERS :
-
MARCIA JACYNTHA NUNES RODRIGUES LUCENA
-
LYRENE FERNANDES DA SILVA
-
ISABEL DILLMANN NUNES
-
MARIA LENCASTRE PINHEIRO DE MENEZES E CRUZ
-
Data: Aug 15, 2018
-
-
Show Abstract
-
In the literature, many problems are pointed out regarding the Requirements Engineering process. Recent research shows that software development environments face many challenges, ranging from requirements elicitation to validation. The challenges listed in the literature are covered in the academic Requirements Engineering course; they impact product quality and may compromise the continuity of a project. We therefore believe there may be a deficit in the teaching of the course that affects industry, besides a possible lack of parallelism between the two contexts. Given that scenario, this work lists methodologies and activities that change the traditional method of teaching Requirements Engineering, focusing on interpreting solutions and writing requirements. To that end, we performed a systematic literature review to identify how the course is taught, and we conducted a survey of professors and industry practitioners to identify the state of the course and the difficulties within the area in the country. It was verified that both professors and industry face many challenges, and the industry challenges may be a consequence of academic teaching. It is necessary to know the challenges before they impact the job market, which means they need to be identified while still in the academy. From the results obtained, we concluded that it is indeed essential to overcome these challenges while still in the academy, and that more practical activities and new approaches are needed in the classroom. On the industry side, we recommend collaboration with the academy: once industry demands are identified, the academy can provide future professionals with an education based on the expected skills.
|
|
23
|
-
JEFFERSON IGOR DUARTE SILVA
-
An AI based Tool for Networks-on-Chip Design Space Exploration
-
Advisor : MARCIO EDUARDO KREUTZ
-
COMMITTEE MEMBERS :
-
DEBORA DA SILVA MOTTA MATOS
-
MARCIO EDUARDO KREUTZ
-
MONICA MAGALHAES PEREIRA
-
Data: Aug 29, 2018
-
-
Show Abstract
-
With the increasing number of cores in Systems on Chip (SoCs), bus architectures have suffered performance limitations. As applications demand more bandwidth and lower latencies, buses cannot comply with such requirements, due to longer wires and increased capacitance. Facing this scenario, Networks-on-Chip (NoCs) emerged as a way to overcome the limitations found in bus-based systems. NoCs are composed of a set of routers and communication links, each component with its own characteristics. Fully exploring all possible NoC configuration settings is unfeasible due to the huge design space to cover; therefore, methods to speed up this process are needed. In this work we propose the use of Artificial Intelligence techniques to optimize NoC architectures, by developing an AI-based tool that explores the design space in terms of latency prediction for different NoC component configurations. So far, nine classifiers have been evaluated. To evaluate the tool, tests were performed on audio/video applications with two traffic patterns, Perfect Shuffle and Matrix Transpose, with four different communication requirements. The preliminary results show an accuracy of up to 85% using a Decision Tree to predict latency values.
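As a toy stand-in for the tool's latency predictor, the sketch below classifies a NoC configuration into a latency class with a nearest-neighbour rule over hypothetical configuration features (the study evaluated nine classifiers, with a Decision Tree performing best; the feature names here are assumptions for illustration):

```python
def predict_latency(config, training):
    """Nearest-neighbour stand-in for the classifiers the tool evaluates.

    config: feature vector, e.g. (buffer_depth, virtual_channels, flit_width)
            -- hypothetical NoC parameters;
    training: list of (feature_vector, latency_class) pairs from
              previously simulated configurations."""
    def dist(a, b):
        # squared Euclidean distance between configurations
        return sum((x - y) ** 2 for x, y in zip(a, b))
    # return the class of the closest known configuration
    return min(training, key=lambda t: dist(t[0], config))[1]
```

The design point is that simulated configurations become training samples, so new points in the design space can be assessed without running a full simulation for each one.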
|
|
24
|
-
JHOSEPH KELVIN LOPES DE JESUS
-
Information Theory Approaches for Automated Feature Selection
-
Advisor : ANNE MAGALY DE PAULA CANUTO
-
COMMITTEE MEMBERS :
-
ANNE MAGALY DE PAULA CANUTO
-
BENJAMIN RENE CALLEJAS BEDREGAL
-
DANIEL SABINO AMORIM DE ARAUJO
-
ANDRÉ CARLOS PONCE DE LEON FERREIRA DE CARVALHO
-
Data: Sep 21, 2018
-
-
Show Abstract
-
One of the main problems of machine learning algorithms is dimensionality. With the rapid growth of complex data in real-world scenarios, attribute selection becomes a mandatory pre-processing step in any application, to reduce data complexity and computational time. Based on this, several works have developed efficient methods to accomplish this task. Most attribute selection methods select the best attributes based on some specific criterion. In addition, recent studies have successfully constructed models that select attributes considering the particularities of the data, assuming that similar samples should be treated separately. Although some progress has been made, a poor choice of a single algorithm or criterion to assess the importance of attributes, and the arbitrary choice of the number of attributes made by the user, can lead to poor analysis. In order to overcome some of these issues, this work develops two strands of automated attribute selection approaches. The first consists of fusion methods for multiple attribute selection algorithms, which use ranking-based strategies and classifier committees to combine attribute selection algorithms in terms of data (Data Fusion) and decision (Decision Fusion), allowing researchers to consider different perspectives in the attribute selection step. The second method (PF-DFS) improves a dynamic selection algorithm (DFS) using the idea of the Pareto frontier from multi-objective optimization, which allows us to consider different perspectives on the relevance of the attributes and to automatically define the number of attributes to select. The proposed approaches were tested using more than 15 real and artificial databases, and the results showed that, when compared to individual selection methods such as the original DFS itself, the performance of one of the proposed methods is notably higher.
In fact, the results are promising, since the proposed approaches also achieved superior performance when compared to established dimensionality reduction methods and to the original data sets, showing that the reduction of noisy and/or redundant attributes may have a positive effect on the performance of classification tasks.
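The Pareto-frontier idea behind PF-DFS can be sketched as computing the non-dominated set of features under several relevance criteria at once; the size of the front then yields the number of attributes automatically. This is a minimal illustration under that reading, not the thesis's algorithm:

```python
def pareto_front(scores):
    """Indices of non-dominated features.

    scores[i] is a tuple of criteria to maximize for feature i
    (e.g. relevance values from different selection criteria)."""
    def dominates(a, b):
        # a dominates b: no criterion worse, at least one strictly better
        return (all(x >= y for x, y in zip(a, b)) and
                any(x > y for x, y in zip(a, b)))
    return [i for i, s in enumerate(scores)
            if not any(dominates(t, s)
                       for j, t in enumerate(scores) if j != i)]
```

For instance, with scores `[(0.9, 0.2), (0.5, 0.8), (0.4, 0.1)]` the third feature is dominated by the first, so only the first two survive, and the selected attribute count (two) falls out of the front rather than being fixed by the user.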
|
|
25
|
-
SAMUEL DA SILVA OLIVEIRA
-
Optimization of Irregular NoC Topology for Real-Time and Non-Real-Time Applications in Networks-on-Chip based MP-SoCs.
-
Advisor : MARCIO EDUARDO KREUTZ
-
COMMITTEE MEMBERS :
-
MARCIO EDUARDO KREUTZ
-
MONICA MAGALHAES PEREIRA
-
GUSTAVO GIRAO BARRETO DA SILVA
-
ALISSON VASCONCELOS DE BRITO
-
Data: Dec 7, 2018
-
-
Show Abstract
-
With the evolution of multiprocessing architectures, Networks-on-Chip (NoCs) have become a viable solution for the communication subsystem. Among the many possible architectural implementations, some use regular topologies, which are more common and easier to design; others follow irregularities in the communication pattern, resulting in irregular topologies. A good design space exploration can find the best-performing configuration among all architectural possibilities. This work proposes a network with an optimized irregular topology, where communication is based on routing tables, and a tool that performs this exploration through a Genetic Algorithm. The proposed network features heterogeneous routers (which can help with network optimization) and supports real-time and non-real-time packets. The goal of this work is to find a network (or a set of networks), through design space exploration, that has the best average latency and the highest percentage of packets that meet their deadlines.
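A minimal sketch of GA-based design space exploration, assuming each candidate topology is encoded as a bit-vector of optional links and that a `fitness` function estimates average latency (lower is better). The encoding and operators are illustrative assumptions, not the thesis's representation:

```python
import random

def explore(fitness, n_links=8, pop_size=20, generations=50, seed=1):
    """Tiny elitist GA over bit-vector topology encodings (a sketch)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_links)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                 # lower fitness = better
        survivors = pop[:pop_size // 2]       # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_links)   # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(n_links)        # point mutation
            child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)
```

With a toy fitness such as `lambda ind: abs(sum(ind) - 3)` (pretending a topology with exactly three extra links minimizes latency), the search converges to a three-link individual; a real run would replace this with a latency estimate from simulation.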
|
|
26
|
-
SAMUEL DE MEDEIROS QUEIROZ
-
INFRASTRUCTURE AS A SERVICE INTRA-PLATFORM INTEROPERABILITY: An Exploratory Study with OpenStack
-
Advisor : THAIS VASCONCELOS BATISTA
-
COMMITTEE MEMBERS :
-
ANDREY ELÍSIO MONTEIRO BRITO
-
JACQUES PHILIPPE SAUVÉ
-
NELIO ALESSANDRO AZEVEDO CACHO
-
THAIS VASCONCELOS BATISTA
-
Data: Dec 10, 2018
-
-
Show Abstract
-
The emergence of new digital technologies comes with challenging technical and business requirements. The traditional approach to providing computational infrastructure for application workloads, which relies on in-house management of hardware, does not offer the technical and cost-effectiveness attributes needed to deliver high performance, reliability and scalability. As a major technological paradigm shift, cloud computing allows diverse deployment and service model alternatives, suitable for diverse requirements such as security, latency, computational performance, availability and cost. Numerous companies therefore operate thousands of clouds worldwide, creating a competitive market in which players build unique features to differentiate themselves from competitors. Consequently, on the consumer side, picking a vendor typically translates into vendor lock-in, a situation where the applications heavily depend on the vendor's way of exposing features, making it difficult to switch between vendors whenever convenient, or to support complex scenarios across multiple distributed heterogeneous clouds, such as federation. An immediate work-around for users is to pick cloud solutions that implement standards or post-facto open-source platforms, such as OpenStack, which are assumed to provide native interoperability between installations. In industry, however, OpenStack shows that the lack of interoperability is a real concern even between its own deployments, due to the high flexibility and complexity of the supported use cases. This investigation therefore documents intra-platform interoperability in OpenStack, presenting in detail the Python client library created by the community to abstract deployment differences, which includes numerous significant contributions from the author.
Afterwards, an extensive validation of that library is performed across one testing cloud and five production clouds from different vendors worldwide because, although the library is extensively used by the community, it had never been formally validated. The validation unveiled bugs as well as functionality and documentation gaps. Since OpenStack intra-platform interoperability had never been documented in the literature, a systematic literature review followed, allowing a deep comparison of the state of the art of vendor lock-in taxonomy and approaches against that library, presenting its advantages, disadvantages and recommendations for users. Lastly, suggestions for future work include support for multiple programming languages and the adoption of the client library as a standard for inter-platform interoperability.
|
|
27
|
-
ALLAN VILAR DE CARVALHO
-
The Problem of the Traveling Salesman with Multiple Passengers and Quota
-
Advisor : MARCO CESAR GOLDBARG
-
COMMITTEE MEMBERS :
-
ELIZABETH FERREIRA GOUVEA GOLDBARG
-
MARCO CESAR GOLDBARG
-
MATHEUS DA SILVA MENEZES
-
SILVIA MARIA DINIZ MONTEIRO MAIA
-
Data: Dec 14, 2018
-
-
Show Abstract
-
This work presents the Traveling Salesman Problem with Multiple Passengers and Quota, a variant of the Traveling Salesman Problem with Quota in which a route is generated for the salesman, the driver of a vehicle, who can share the car seats with passengers requesting rides between the localities of the route. Every passenger on board shares the cost of the route segments traveled while in the vehicle. A mathematical model, an instance bank and a set of resolution methods composed of an exact method, an ad hoc heuristic and seven metaheuristics are proposed for the problem. The results of the exact method for the instances with 10 and 20 localities are reported, and quantitative and qualitative analyses of computational experiments comparing the resolution methods are presented.
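The cost-apportionment rule described above can be illustrated as follows: on each leg of the route, the leg's cost is divided equally among the driver and the passengers on board, so the salesman pays only his share. Names and signature are illustrative, not the thesis's notation:

```python
def driver_cost(leg_costs, riders):
    """Salesman's total expense after apportionment (a sketch).

    leg_costs[i]: cost of the i-th route segment;
    riders[i]: number of passengers aboard on that segment."""
    total = 0.0
    for cost, passengers in zip(leg_costs, riders):
        # driver plus passengers split the segment cost equally
        total += cost / (passengers + 1)
    return total
```

For example, with segment costs `[10, 20, 30]` and `[1, 3, 0]` passengers aboard, the driver pays 10/2 + 20/4 + 30/1 = 40, which is why picking up ride requests can make an otherwise expensive tour profitable.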
|
|
|
Thesis |
|
1
|
-
SAMUEL LINCOLN MAGALHÃES BARROCAS
-
A Strategy to verify the code generation from Circus to Java
-
Advisor : MARCEL VINICIUS MEDEIROS OLIVEIRA
-
COMMITTEE MEMBERS :
-
MARCEL VINICIUS MEDEIROS OLIVEIRA
-
MARTIN ALEJANDRO MUSICANTE
-
UMBERTO SOUZA DA COSTA
-
ALEXANDRE CABRAL MOTA
-
BRUNO EMERSON GURGEL GOMES
-
Data: Feb 22, 2018
-
-
Show Abstract
-
The use of automatic code generators for formal methods not only minimizes the effort of implementing software systems, but also reduces the chance of errors in the execution of such systems. These tools, however, can themselves have faults in their source code that cause errors in the generated software, and thus the verification of such tools is encouraged. This PhD thesis creates and develops a strategy to verify JCircus, an automatic code generator from a large subset of Circus to Java. The interest in Circus comes from the fact that it allows the specification of the concurrent and state-rich aspects of a system in a straightforward manner. The verification strategy consists of the following steps: (1) extension of the existing operational semantics of Circus, and proof that it is sound with respect to the existing denotational semantics of Circus in the Unifying Theories of Programming (UTP), a framework that allows the proof and unification of different theories; (2) development and implementation of a strategy that refinement-checks the code generated by JCircus, through a toolchain that encompasses a Labelled Predicate Transition System (LPTS) generator for Circus and a model generator that inputs this LPTS and generates an oracle, using the Java Pathfinder code model-checker, that refinement-checks the code generated by JCircus. Combined with coverage-based testing techniques, we envisage improving the reliability of the code generation from Circus to Java.
|
|
2
|
-
ROMERITO CAMPOS DE ANDRADE
-
Multicasting Routing in Multisession: Models and Algorithms.
-
Advisor : MARCO CESAR GOLDBARG
-
COMMITTEE MEMBERS :
-
ELIZABETH FERREIRA GOUVEA GOLDBARG
-
MARCO CESAR GOLDBARG
-
MATHEUS DA SILVA MENEZES
-
PAULO HENRIQUE ASCONAVIETA DA SILVA
-
SILVIA MARIA DINIZ MONTEIRO MAIA
-
Data: May 14, 2018
-
-
Show Abstract
-
Multicast technology has been studied over the last two decades and has shown itself to be a good approach for saving network resources. Many approaches have been considered to solve the multicast routing problem, considering one session with a single source attending the session's demand, as well as multiple sessions with more than one source per session. In this thesis, the multicast routing problem is explored through the models and algorithms designed to solve it when there are multiple sessions and sources. Two new models are proposed, with different focuses. The first is a mono-objective model that maximizes the residual capacity Z of the network subject to a budget. The second is a multi-objective model with three objective functions: cost, Z and hop count. Both models consider a multisession scenario with one source per session. In addition, a third model is examined, designed to optimize Z in a scenario with multiple sessions and more than one source per session. An experimental analysis was carried out on the models considered, and for each model a set of algorithms was designed. First, an Ant Colony Optimization algorithm, a Genetic Algorithm, a GRASP and an ILS algorithm were designed to solve the mono-objective model, optimizing Z subject to a budget. Second, a set of algorithms was designed to solve the multi-objective model, using the classical approaches NSGA2, ssNSGA2, SMS-EMOA, GDE3 and MOEA/D. In addition, a transgenetic algorithm was designed and compared against the classical approaches. This algorithm uses subpopulations during the evolution, each based on a solution construction operator guided by one of the objective functions; some solutions are treated as elite solutions and improved by a transposon operator. Eight versions of the transgenetic algorithm were evaluated.
Third, an algorithm was designed to solve the problem with multiple sessions and multiple sources per session. This algorithm is based on Voronoi diagrams and is called MMVD. The algorithms designed were evaluated in a large experimental analysis, and the samples generated by each algorithm on the instances were compared using non-parametric statistical tests. The analysis indicates that the ILS and Genetic algorithms outperformed Ant Colony Optimization and GRASP, and the comparison between ILS and the Genetic Algorithm shows that ILS has better processing time. In the multi-objective scenario, the version of the transgenetic algorithm called cross0 proved statistically better than the other algorithms on most of the instances, based on the hypervolume and additive/multiplicative epsilon quality indicators. Finally, the MMVD algorithm proved better than the algorithm from the literature, based on the experimental analysis performed for the model with multiple sessions and multiple sources per session.
|
|
3
|
-
ANTONIO DIEGO SILVA FARIAS
-
Generalized OWA functions
-
Advisor : REGIVAN HUGO NUNES SANTIAGO
-
COMMITTEE MEMBERS :
-
BENJAMIN RENE CALLEJAS BEDREGAL
-
EDUARDO SILVA PALMEIRA
-
REGIVAN HUGO NUNES SANTIAGO
-
RONEI MARCOS DE MORAES
-
SANDRA APARECIDA SANDRI
-
Data: Jun 29, 2018
-
-
Show Abstract
-
In the literature it is quite common to find problems that need efficient mechanisms for combining inputs of the same nature into a value of the same type as the inputs. Aggregation functions are quite efficient at this task and can be used, for example, to model the connectives of fuzzy logic and also in decision-making problems. An important family of aggregations, belonging to the class of averaging functions, was introduced by Yager in 1988, who called them ordered weighted averaging (OWA) functions. These functions are a kind of weighted average whose weights are associated not with particular inputs but with their respective magnitudes: the importance of an input is determined by its value. More recently, it has been found that functions outside the aggregation class may also be able to combine inputs, such as pre-aggregations and mixture functions, which may not satisfy the monotonicity condition that is mandatory for aggregation functions. Thus, the objective of this work is to present a detailed study of aggregations and pre-aggregations, in order to provide a good theoretical basis in an area with a wide range of possible applications. We present a detailed study of generalized mixture functions (GM), which extend Yager's OWA functions, and propose some ways to generalize GM functions: limited generalized mixture functions and dynamic ordered weighted averaging functions.
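Yager's OWA function is compact enough to state directly: the inputs are sorted in decreasing order and the weights apply to ranks rather than to the inputs themselves, so the same weight vector recovers the maximum, the minimum, or the arithmetic mean as special cases:

```python
def owa(weights, inputs):
    """Ordered weighted averaging (Yager, 1988).

    The i-th weight multiplies the i-th largest input, so weights
    attach to ranks, not to the inputs themselves."""
    assert len(weights) == len(inputs)
    assert abs(sum(weights) - 1.0) < 1e-9   # weights must sum to 1
    return sum(w * x
               for w, x in zip(weights, sorted(inputs, reverse=True)))
```

With three inputs, weights `(1, 0, 0)` give the maximum, `(0, 0, 1)` the minimum, and `(1/3, 1/3, 1/3)` the arithmetic mean, which is exactly the rank-based behavior the abstract describes.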
|
|
4
|
-
EDMILSON BARBALHO CAMPOS NETO
-
Improving the SZZ Algorithm to Deal with Semantically Equivalent Changes
-
Advisor : UIRA KULESZA
-
COMMITTEE MEMBERS :
-
DANIEL ALENCAR DA COSTA
-
EDUARDO HENRIQUE DA SILVA ARANHA
-
INGRID OLIVEIRA DE NUNES
-
MARCELO DE ALMEIDA MAIA
-
ROBERTA DE SOUZA COELHO
-
UIRA KULESZA
-
Data: Jul 20, 2018
-
-
Show Abstract
-
The SZZ algorithm was originally proposed by Sliwerski, Zimmermann and Zeller (hence the abbreviation SZZ) to identify the changes that introduce bugs into the code. Although well accepted by the academic community, many researchers have, over the years, reported limitations associated with the SZZ algorithm. On the other hand, no work has deeply investigated how SZZ is used, extended or evaluated by the software engineering community, and few works have proposed improvements to it. In this context, this thesis aims to reveal the limitations of the SZZ algorithm documented in the literature in order to improve its state of the art, proposing solutions to some of these limitations. First, we performed a systematic mapping study to identify the state of the art of the SZZ algorithm and to explore how it has been used, its limitations, proposed improvements and evaluations. We adopted an existing research technique known as snowballing to conduct systematic literature studies: starting from two renowned papers, we read all of their 589 citations and references, resulting in 190 papers to be analyzed. Our results show that most papers use SZZ as the basis of empirical studies (83%), while only a few papers actually propose direct improvements to SZZ (3%) or evaluate it (7%). We also observed that SZZ has many unaddressed limitations, such as the bias related to semantically equivalent changes, e.g. refactorings, which has not been addressed by any previous SZZ implementation. Subsequently, we conducted an empirical study to investigate the relationship between refactorings and SZZ results, using RefDiff, the refactoring detection tool with the highest precision reported in the literature.
We ran RefDiff both on the changes analyzed by SZZ as responsible for fixing bugs (issue-fix changes) and on the changes identified by the algorithm as inducing the fix (fix-inducing changes). The results of this study indicate a refactoring rate of 6.5% in fix-inducing changes and 20% in issue-fix changes. Moreover, we identified that 39% of the fix-inducing changes derive from issue-fix changes containing refactorings, so such changes should not even have been analyzed by SZZ. These results suggest that refactorings can indeed impact SZZ results. Finally, we intend to extend this second study by expanding the types of refactorings identified, incorporating other refactoring detection tools into our algorithm. In addition, we plan to run a third study to evaluate our improved SZZ implementation, which deals with semantically equivalent changes, using an evaluation framework on the same dataset previously used in the literature. We hope the results of this thesis contribute to the maturation of SZZ and, consequently, bring the algorithm closer to wider acceptance in practice.
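The core SZZ step can be sketched as follows: the lines deleted by a bug-fix commit are traced back, via `git blame`, to the commits that last touched them, and those commits become the bug-introducing candidates. Here `blame` is a toy stand-in mapping used only for illustration; the thesis's contribution is to filter out changes that are semantically equivalent (e.g. refactorings) so they do not pollute this step:

```python
def szz_candidates(fix_deleted_lines, blame):
    """Simplified core of SZZ.

    fix_deleted_lines: (file, line_number) pairs deleted by a bug-fix
                       commit;
    blame: stand-in for `git blame` output, mapping (file, line_number)
           to the commit that last modified that line."""
    return {blame[(f, n)]
            for f, n in fix_deleted_lines
            if (f, n) in blame}
```

If a fix deletes line 10 of `A.java` (last touched by commit `c1`) and line 3 of `B.java` (also `c1`), then `c1` alone is flagged as the candidate bug-introducing change.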
|
|
5
|
-
IGOR ROSBERG DE MEDEIROS SILVA
-
BO-MAHM: A Multi-agent Architecture for Hybridization of Metaheuristics for Bi-objective Optimization
-
Advisor : ELIZABETH FERREIRA GOUVEA GOLDBARG
-
COMMITTEE MEMBERS :
-
ELIZABETH FERREIRA GOUVEA GOLDBARG
-
GIVANALDO ROCHA DE SOUZA
-
MARCO CESAR GOLDBARG
-
MYRIAM REGATTIERI DE BIASE DA SILVA DELGADO
-
SILVIA MARIA DINIZ MONTEIRO MAIA
-
Data: Aug 3, 2018
-
-
Show Abstract
-
Several studies have pointed to the hybridization of metaheuristics as an effective way to deal with combinatorial optimization problems. Hybridization allows the combination of different techniques, exploiting the strengths and compensating for the weaknesses of each of them. MAHM is a promising adaptive framework for the hybridization of metaheuristics, originally designed for single-objective problems and based on the concepts of Multi-agent Systems and Particle Swarm Optimization. In this study we propose an extension of MAHM to the bi-objective scenario, called BO-MAHM. To adapt MAHM to the bi-objective context, we redefine some concepts, such as particle position and velocity. The proposed framework is applied to the bi-objective symmetric Traveling Salesman Problem, hybridizing four methods: PAES, GRASP, NSGA2 and Anytime-PLS. Experiments with 11 bi-objective instances were performed, and the results show that BO-MAHM is able to provide better non-dominated sets than those obtained by algorithms in the literature, as well as by the hybridized versions of those algorithms proposed in this work.
|
|
6
|
-
DENIS FELIPE
-
MOSCA/D: Multi-objective Scientific Algorithms Based on Decomposition
-
Advisor : ELIZABETH FERREIRA GOUVEA GOLDBARG
-
COMMITTEE MEMBERS :
-
ELIZABETH FERREIRA GOUVEA GOLDBARG
-
MARCO CESAR GOLDBARG
-
SILVIA MARIA DINIZ MONTEIRO MAIA
-
MATHEUS DA SILVA MENEZES
-
MYRIAM REGATTIERI DE BIASE DA SILVA DELGADO
-
Data: Aug 17, 2018
-
-
Show Abstract
-
This work presents a multi-objective version of the Scientific Algorithms, based on decomposition (MOSCA/D). This approach is a new metaheuristic, inspired by the processes of scientific research, for solving multi-objective optimization problems. MOSCA/D uses the concept of theme to direct the computational effort of the search to promising regions of the objective space, fixing different decision variables in each iteration; a probabilistic model based on the TF-IDF statistic assists the choice of such variables. Computational experiments applied MOSCA/D to 16 instances of the multi-objective multidimensional knapsack problem (MOMKP) with up to 8 objectives. The results were compared to NSGA-II, SPEA2, MOEA/D, MEMOTS, 2PPLS, MOFPA and HMOBEDA, covering, respectively, three classical multi-objective algorithms, two state-of-the-art algorithms for the problem, and the two most recently published algorithms for the problem. Statistical tests showed evidence that MOSCA/D can compete with other consolidated approaches from the literature and can now be considered the state-of-the-art algorithm for the MOMKP on instances with more than two objectives, considering the hypervolume and epsilon quality indicators.
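Decomposition-based algorithms such as MOEA/D turn a multi-objective problem into scalar subproblems, one per weight vector. The weighted Tchebycheff function below is one common scalarization, shown for illustration (the abstract does not state which scalarization MOSCA/D uses):

```python
def tchebycheff(objectives, weights, ideal):
    """Weighted Tchebycheff scalarization (to minimize).

    objectives: objective values of a candidate solution;
    weights: the weight vector defining this subproblem;
    ideal: the ideal point (best value seen per objective)."""
    return max(w * abs(f - z)
               for w, f, z in zip(weights, objectives, ideal))
```

For two minimization objectives with ideal point (0, 0), solution A = (1, 3) scores 1.5 and B = (2, 2) scores 1.0 under weights (0.5, 0.5), so B wins that subproblem; under weights (0.9, 0.1), A scores 0.9 and wins instead. Varying the weight vectors is what spreads the search across the Pareto front.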
|
|
7
|
-
JOSÉ AUGUSTO SARAIVA LUSTOSA FILHO
-
Exploring diversity and similarity as criteria in ensemble systems based on dynamic selection
-
Advisor : ANNE MAGALY DE PAULA CANUTO
-
COMMITTEE MEMBERS :
-
ANNE MAGALY DE PAULA CANUTO
-
ARAKEN DE MEDEIROS SANTOS
-
BRUNO MOTTA DE CARVALHO
-
DANIEL SABINO AMORIM DE ARAUJO
-
GEORGE DARMITON DA CUNHA CAVALCANTI
-
Data: Aug 24, 2018
-
-
Show Abstract
-
Pattern classification can be considered one of the most important activities in the pattern recognition area, aiming to assign an unknown test sample to a class. Individual classifiers generally do not achieve recognition rates as good as those of multiple-classifier systems; thus, ensembles of classifiers can be used to increase the accuracy of classification systems. Ensemble systems provide good recognition rates when the member classifiers have uncorrelated errors in different sub-spaces of the problem, a characteristic measured by diversity measures. In this context, the present thesis explores ensemble systems using dynamic selection. Unlike ensembles using static selection, in ensembles using dynamic selection the competence level of each classifier in an initial pool is estimated for each test pattern, and only the most competent classifiers are selected to classify it. This work aims to explore, evaluate and propose methods for the dynamic selection of classifiers based on diversity measures. To achieve this goal, several ensemble systems from the literature that use dynamic selection are explored, and hybrid versions of them are proposed, in order to quantify, through experiments, the influence of diversity measures among member classifiers in ensemble systems. The contribution of this work is therefore to empirically elucidate the advantages and disadvantages of using diversity measures in the dynamic selection of classifiers.
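Dynamic selection can be sketched as follows: for each test pattern, estimate each classifier's competence on the validation samples nearest to that pattern and keep the most competent one, in the spirit of local-accuracy methods such as OLA. This is an illustration of the general scheme, not one of the thesis's proposed diversity-based methods:

```python
def select_classifier(x, pool, validation, k=3):
    """Pick the classifier most competent in the local region of x.

    pool: list of predict functions (sample -> label);
    validation: list of (sample, label) pairs;
    k: size of the local region of competence."""
    def dist(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))
    # region of competence: the k validation samples nearest to x
    region = sorted(validation, key=lambda v: dist(v[0], x))[:k]
    def competence(clf):
        # local accuracy of clf on the region
        return sum(clf(s) == y for s, y in region)
    return max(pool, key=competence)
```

A classifier that is mediocre globally can still be selected when it dominates the neighbourhood of the test pattern, which is the premise of dynamic over static selection.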
|
|
8
|
-
RONILDO PINHEIRO DE ARAUJO MOURA
-
Hierarchical Clustering Ensemble preserving the T-transitivity
-
Advisor : BENJAMIN RENE CALLEJAS BEDREGAL
-
COMMITTEE MEMBERS :
-
ANNE MAGALY DE PAULA CANUTO
-
BENJAMIN RENE CALLEJAS BEDREGAL
-
FLAVIO BEZERRA COSTA
-
ARAKEN DE MEDEIROS SANTOS
-
EDUARDO SILVA PALMEIRA
-
Data: Oct 5, 2018
-
-
Show Abstract
-
The main idea of ensemble learning is to improve machine learning results by combining several models. Initially applied to supervised learning, this approach usually produces better results than single methods. Similarly, unsupervised ensemble learning, or consensus clustering, produces clusterings that are more robust than those of single methods. Most consensus methods are designed for flat clustering and are superior in quality to single clustering methods; it can thus be expected that a consensus of hierarchical clusterings would also yield hierarchical clusterings of higher quality. Recent studies have not taken into account the particularities inherent in the different hierarchical clustering methods during the consensus process. This work investigates the impact of ensemble consistency on the final consensus results, considering the different hierarchical methods used in the ensemble. We propose a process that preserves intermediate transitivity in dendrograms. In this algorithm, the dendrograms describing the base clusterings are first converted to ultrametric matrices. Then, after a fuzzification process, a consensus function based on an aggregation operator with the transitivity-preserving property is applied to the matrices to form the final consensus matrix. The final clustering is a dendrogram obtained from this aggregated matrix. Analyzing the results of experiments performed on well-known datasets, and visualizing the algorithm's behavior on visual (two-dimensional) datasets, shows that this approach can significantly improve accuracy while retaining the consistency property.
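The pipeline above (ultrametric matrices, fuzzification, transitivity-preserving aggregation) can be sketched as follows. This is an illustrative assumption, not the thesis algorithm: each ultrametric distance matrix is fuzzified into a similarity matrix, the similarities are averaged (which in general breaks transitivity), and min-transitivity is then restored by max-min transitive closure, so the result again corresponds to an ultrametric that can be cut into a dendrogram.

```python
import numpy as np

def min_transitive_closure(S):
    """Make a fuzzy similarity matrix min-transitive, i.e. enforce
    S[i,k] >= min(S[i,j], S[j,k]) for all j, by iterating the max-min
    composition to a fixed point."""
    S = S.copy()
    while True:
        # max-min composition: T[i,k] = max_j min(S[i,j], S[j,k])
        T = np.max(np.minimum(S[:, :, None], S[None, :, :]), axis=1)
        T = np.maximum(S, T)
        if np.array_equal(T, S):
            return T
        S = T

def consensus_similarity(ultrametrics):
    """Sketch of a transitivity-preserving consensus: fuzzify each
    ultrametric distance matrix into a similarity matrix, aggregate
    element-wise, then restore min-transitivity."""
    sims = [1.0 - D / D.max() for D in ultrametrics]   # fuzzification
    mean = np.mean(sims, axis=0)   # averaging may break transitivity
    return min_transitive_closure(mean)
```

A min-transitive similarity matrix is exactly the fuzzified form of an ultrametric, which is why the closure step guarantees the consensus can be rendered back as a dendrogram.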
|
|
9
|
-
EDUARDO ALEXANDRE FERREIRA SILVA
-
Mission-driven Software-intensive System-of-Systems Architecture Design
-
Advisor : THAIS VASCONCELOS BATISTA
-
COMMITTEE MEMBERS :
-
ABDELHAK-DJAMEL SERIAI
-
ELISA YUMI NAKAGAWA
-
FLAVIO OQUENDO
-
KHALIL DRIRA
-
MARCEL VINICIUS MEDEIROS OLIVEIRA
-
THAIS VASCONCELOS BATISTA
-
Data: Dec 17, 2018
-
-
Show Abstract
-
Missions represent a key concern in the development of systems-of-systems (SoS), since they can be related both to capabilities of constituent systems and to interactions among these systems that contribute to the accomplishment of the global goals of the SoS. For this reason, mission models are promising starting points for the SoS development process, and they can be used as a basis for the specification, validation and verification of SoS architectural models. Specifying, validating and verifying architectural models for SoS are difficult tasks compared to usual systems; the inner complexity of this kind of system lies especially in its emergent behaviors, i.e. features that emerge from the cooperation between the constituent parts of the SoS and that often cannot be accurately predicted.
This work is concerned with this synergetic relationship between mission and architectural models, giving special attention to the emergent behaviors that arise for a given configuration of the SoS. We propose a development process for the architectural modeling of SoS, centered on the so-called mission models. In this proposal, the mission model is used both to derive and to validate/verify architectures of the SoS. First, we define a formal mission model, from which we generate the structural definition of the architecture using model transformation. Later, as the architect specifies the behavioral aspects of the system using this architecture, we can generate concrete architectures that are verified and validated using simulation-based approaches. The verification uses statistical model checking to determine whether the properties are satisfied, within a degree of confidence. The validation is aimed at emergent behaviors and missions, but can be extended to any aspect of the mission model. The simulation also allows the identification of unpredicted emergent behaviors. A toolset that integrates existing tools and implements the whole process is also presented.
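The core idea of statistical model checking mentioned above can be sketched independently of any particular toolset. This is a hedged illustration, not the thesis implementation: the Chernoff-Hoeffding bound fixes how many independent simulation runs are needed so that the estimated probability of a property holding is within `eps` of the true value with confidence 1 - `delta`. The `simulate` and `holds` hooks are hypothetical placeholders for a trace generator and a property monitor.

```python
import math
import random

def smc_estimate(simulate, holds, eps=0.05, delta=0.01, seed=0):
    """Statistical model checking sketch: estimate the probability that
    a property holds on random simulation traces, with an a-priori
    sample size from the Chernoff-Hoeffding bound."""
    # runs needed for |estimate - truth| <= eps with prob. >= 1 - delta
    n = math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if holds(simulate(rng)))
    return hits / n, n
```

With `eps=0.05` and `delta=0.01` this requires 1060 runs, regardless of the size of the underlying state space, which is what makes the approach tractable for SoS whose emergent behaviors defeat exhaustive model checking.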
|
|