2020 |
Tosi, Mauro Dalle Lucca
Constructing Knowledge Graphs from Textual Documents for Scientific Literature Analysis (mastersthesis)
mastersthesis,
2020.
(
Abstract |
Links |
BibTeX |
Tags:
Graph (Computer system),
Complex networks,
Centrality (Graph Theory),
Semantic computing,
Scientific knowledge
)
@mastersthesis{tosi2020,
abstract = {The amount of publications a researcher must absorb has been increasing over the last years. Consequently, among so many options, it is hard for them to identify interesting documents to read related to their studies. Researchers usually search for review articles to understand how a scientific field is organized and to study its state of the art. Depending on the studied area, this option can be unavailable or outdated, and researchers usually have to perform such a laborious background research task manually. Recent research has developed mechanisms to assist researchers in understanding the structure of scientific fields. However, those mechanisms focus on recommending relevant articles or on supporting researchers in understanding how a scientific field is organized based on the documents that belong to it. These methods limit the field understanding, not allowing researchers to study the underlying concepts and relations that compose a scientific field and its sub-areas. This M.Sc. thesis proposes a framework to structure, analyze, and track the evolution of a scientific field at a concept level. Given a set of textual documents such as research papers, it first structures a scientific field as a knowledge graph using its detected concepts as vertices. Then, it automatically identifies the field's main sub-areas, extracts their keyphrases, and studies their relations. Our framework enables representing the scientific field in distinct time periods. It allows comparing these representations and identifying how the field's areas changed over time. We evaluate each step of our framework by representing and analyzing scientific data from distinct fields of knowledge in case studies. Our findings indicate success in detecting the sub-areas based on the graph generated from natural language documents. We observe similar outcomes in the different case studies, indicating that our approach is applicable to distinct domains.
This research also contributes a web-based software tool that allows researchers to use the proposed framework graphically. By using our application, researchers can obtain an overview of how a scientific field is structured and how it evolved.},
author = {Tosi, Mauro Dalle Lucca},
title = {Constructing Knowledge Graphs from Textual Documents for Scientific Literature Analysis},
school = {University of Campinas - Institute of Computing},
keywords = {Graph;Complex networks;Centrality;Semantic computing;Scientific knowledge},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2020/DissertacaoMauro.pdf},
year = {2020},
date = {2020-03-09}
}
|
2019 |
Bonacin, Rodrigo;
Dos Reis, Julio Cesar;
Baranauskas, Maria Cecília Calani
Universal Participatory Design: Achievements and Challenges (journal)
Journal on Interactive Systems,
SBC,
journal,
2019.
(
Abstract |
Links |
BibTeX |
Tags:
Universal Access,
Participatory Design,
Accessibility,
Democracy in Design
)
@article{bonacin2019u,
author = {Rodrigo Bonacin and Julio Cesar Dos Reis and Maria Cecília Baranauskas},
title = {Universal Participatory Design: Achievements and Challenges},
journal = {Journal on Interactive Systems},
volume = {10},
number = {1},
year = {2019},
keywords = {Universal Access, Participatory Design, Accessibility, Democracy in Design},
abstract = {According to the principles of participatory design, a genuine democratic process requires effective participation of all affected people in the design process; this must include affected disabled users. However, user participation entails complex problems, which are aggravated by conditions of illiteracy and/or aging. This article presents the concept of Universal Participatory Design, a design philosophy and practice that aims to be inclusive during the design process, and which has a positive result for all. We first conducted a review of the literature to understand the limits of the relationships between participatory design and universal design. This paper then addresses some of the challenges to achieve Universal Participatory Design (UPD) by juxtaposing deficits observed in the literature with issues we experienced during two research projects. We discuss the key components of Participatory Design and its relationship to UPD, and establish a research agenda that aims to conceptualize and investigate participatory design with universal access. Our findings indicate the need for flexible design methods, adaptable artifacts, and positive designers’ attitudes when encountering unexpected situations.},
issn = {2236-3297},
url = {https://sol.sbc.org.br/journals/index.php/jis/article/view/714}
}
|
Tosi, Mauro Dalle Lucca;
Dos Reis, Julio Cesar
C-Rank: A Concept Linking Approach to Unsupervised Keyphrase Extraction (conference)
Research Conference on Metadata and Semantics Research,
Springer,
2019.
(
Abstract |
Links |
BibTeX |
Tags:
Keyphrase extraction,
Complex networks,
Semantic annotation
)
@inproceedings{tosi2019c,
title={C-Rank: A Concept Linking Approach to Unsupervised Keyphrase Extraction},
author={Tosi, Mauro Dalle Lucca and dos Reis, Julio Cesar},
booktitle={Research Conference on Metadata and Semantics Research},
pages={236--247},
year={2019},
organization={Springer}
}
Keyphrase extraction is the task of identifying a set of phrases that best represent a natural language document. It is a fundamental and challenging task that assists publishers to index and recommend relevant documents to readers. In this article, we introduce C-Rank, a novel unsupervised approach to automatically extract keyphrases from single documents by using concept linking. Our method explores Babelfy to identify candidate keyphrases, which are weighted based on heuristics and on their centrality inside a co-occurrence graph where keyphrases appear as vertices. It improves the results obtained by graph-based techniques without training data or background data provided by users. Evaluations are performed on the SemEval and INSPEC datasets, producing results competitive with state-of-the-art tools. Furthermore, C-Rank generates intermediate structures with semantically annotated data that can be used to analyze larger textual compendiums, which might improve domain understanding and enrich textual representation methods.
|
Destro, Juliana Medeiros;
dos Reis, Julio Cesar;
Torres, Ricardo da S;
Ricarte, Ivan
Evolution-based refinement of cross-language ontology alignments (symposium)
Anais Principais do XXXIV Simpósio Brasileiro de Banco de Dados,
SBC,
symposium,
2019.
(
Abstract |
Links |
BibTeX |
Tags:
Ontology Alignment,
information interconnectivity,
ontology evolution,
refinement actions,
semantic relations
)
@inproceedings{destro2019e,
author = {Juliana Destro and Julio César dos Reis and Ricardo Torres and Ivan Ricarte},
title = {Evolution-based Refinement of Cross-language Ontology Alignments},
booktitle = {Anais Principais do XXXIV Simpósio Brasileiro de Banco de Dados},
location = {Fortaleza},
year = {2019},
keywords = {Ontology Alignment, information interconnectivity, ontology evolution, refinement actions, semantic relations},
issn = {0000-0000},
pages = {61--72},
publisher = {SBC},
address = {Porto Alegre, RS, Brasil},
doi = {10.5753/sbbd.2019.8808},
url = {https://sol.sbc.org.br/index.php/sbbd/article/view/8808}
}
Ontology alignment plays a key role in information interconnectivity between computational systems relying on ontologies described in different natural languages. Existing approaches for ontology matching usually provide only the equivalence type of relation in the generated mappings. In this article, we propose a refinement technique to enable the update of the semantic type of the mapping, such as “is-a”, “part-of”, etc. Our approach relies on information from the ontology evolution to apply refinement actions. We formalize the refinement actions and procedures, and apply the proposal in application scenarios.
|
Regino, André Gomes;
Matsoui, Julio Kiyoshi Rodrigues;
Dos Reis, Julio Cesar;
Bonacin, Rodrigo;
Morshed, Ahsan;
Sellis, Timos
Understanding Link Changes in LOD via the Evolution of Life Science Datasets (conference)
Proceedings of the Workshop on Semantic Web Solutions for Large-Scale Biomedical Data Analytics co-located with 18th International Semantic Web Conference (ISWC 2019), Auckland, New Zealand, October 27th, 2019,
CEUR-WS.org,
2019.
(
Abstract |
Links |
BibTeX |
Tags:
LOD,
Web of Data evolution,
Link evolution,
Change Operations,
Link changes,
Link Repair,
RDF life science datasets
)
@inproceedings{regino2019,
author = {Andr{\'{e}} Gomes Regino and
Julio Kiyoshi Rodrigues Matsoui and
J{\'{u}}lio C{\'{e}}sar dos Reis and
Rodrigo Bonacin and
Ahsan Morshed and
Timos Sellis},
editor = {Ali Hasnain and
V{\'{\i}}t Nov{\'{a}}cek and
Michel Dumontier and
Dietrich Rebholz{-}Schuhmann},
title = {Understanding Link Changes in {LOD} via the Evolution of Life Science
Datasets},
booktitle = {Proceedings of the Workshop on Semantic Web Solutions for Large-Scale
Biomedical Data Analytics co-located with 18th International Semantic
Web Conference {(ISWC} 2019), Auckland, New Zealand, October 27th,
2019},
series = {{CEUR} Workshop Proceedings},
volume = {2477},
pages = {40--54},
publisher = {CEUR-WS.org},
year = {2019},
urlPaper = {http://ceur-ws.org/Vol-2477/paper\_4.pdf},
urlWeb = {http://ceur-ws.org/Vol-2477/},
biburl = {https://dblp.org/rec/conf/semweb/ReginoMRBMS19.bib},
bibsource = {dblp computer science bibliography, https://dblp.org},
}
RDF data has been extensively deployed for the interlinking of health-related data in a structured way. The definition of link statements between distinct resources plays a key role to interconnect several life science repositories. However, RDF assertions are subject to change, which can affect existing links. In this article, we conduct extensive experiments to understand the evolution of links in the Linked Open Data (LOD). The objective is to empirically associate changes in the semantic definition of data resources with modifications observed in predefined links. We consider two versions of the Agrovoc RDF repository to calculate different types of change operations and associate them to link change actions. Obtained results indicate the existence of the cases investigated in this study. We demonstrate that RDF changes impact the evolution of established links.
|
Yamamoto, V.E.;
dos Reis, J.C.
Updating ontology alignments in life sciences based on new concepts and their context (conference)
Workshop on Semantic Web Solutions for Large-Scale Biomedical Data Analytics - 18th International Semantic Web Conference (ISWC 2019),
CEUR-WS,
2019.
(
Abstract |
Links |
BibTeX |
Tags:
ontology alignment,
ontology evolution,
mapping refinement,
concept addition,
biomedical vocabulary
)
@CONFERENCE{Yamamoto2019u,
author={Yamamoto, V.E. and dos Reis, J.C.},
title={Updating ontology alignments in life sciences based on new concepts and their context},
booktitle={CEUR Workshop Proceedings},
year={2019},
volume={2477},
pages={16-30},
url={https://www.scopus.com/inward/record.uri?eid=2-s2.0-85074556967&partnerID=40&md5=c3faad9a24386c21b55200bb5356a84d},
abstract={Ontologies and their associated mappings in life sciences play a central role in several semantic-enabled tasks. However, the continuous evolution of these ontologies requires updating existing concept alignments. Whereas mapping maintenance techniques have mostly handled revision and removal type of ontology changes, the addition of concepts demands further studies. This article proposes a technique to refine a set of established mappings based on the evolution of biomedical ontologies. We investigate ways of suggesting correspondences with the new version of the ontology without applying a matching operation to the whole set of ontology entities. Obtained results explore the neighbourhood of concepts in the alignment process to update mapping sets. Our experimental evaluation with several versions of aligned biomedical ontologies shows the effectiveness in considering the context of new concepts.},
author_keywords={Biomedical vocabulary; Concept addition; Mapping refinement; Ontology alignment; Ontology evolution},
publisher={CEUR-WS},
document_type={Conference Paper},
source={Scopus}
}
|
do Espírito Santo, Jacqueline M.;
de Paula, Erich Vinicius;
Medeiros, Claudia Bauzer
Exploring Semantics in Clinical Data Interoperability (conference)
Advances in Conceptual Modeling,
Springer International Publishing,
2019.
(
Abstract |
Links |
BibTeX |
Tags:
interoperability,
Medical Knowledge Organizations Systems,
semantic query,
query expansion
)
@inproceedings{Santo2019,
title={Exploring Semantics in Clinical Data Interoperability},
author={Jacqueline do Espírito Santo and Erich Vinicius de Paula and Claudia Bauzer Medeiros},
booktitle={Advances in Conceptual Modeling},
pages={201-210},
year={2019},
organization={Springer International Publishing}
}
|
Rossanez, A.;
dos Reis, J.C.
Generating knowledge graphs from scientific literature of degenerative diseases (conference)
International Workshop on Semantics-Powered Data Mining and Analytics (SEPDA 2019) - 18th International Semantic Web Conference (ISWC 2019),
CEUR-WS,
2019.
(
Abstract |
Links |
BibTeX |
Tags:
Knowledge Graphs,
RDF triples,
Ontologies,
Information Extraction
)
@CONFERENCE{Rossanez2019g,
author={Rossanez, A. and dos Reis, J.C.},
title={Generating knowledge graphs from scientific literature of degenerative diseases},
booktitle={CEUR Workshop Proceedings},
year={2019},
volume={2427},
pages={12-23},
url={https://www.scopus.com/inward/record.uri?eid=2-s2.0-85071743429&partnerID=40&md5=9a5a3c77214912a8d4b7132f6a2ab283},
abstract={Degenerative diseases, such as the Alzheimer’s Disease, can be very serious and life-threatening. As the scientific community strives to fully understand their exact root causes and advance their research on the domain, a massive amount of knowledge is generated. To represent and link all this knowledge, we propose the generation of knowledge graphs from the scientific literature. We aim to provide researchers the ability to relate their new discoveries with the current knowledge and possibly formulate new hypotheses to further advance the research. In this paper, we describe a method to extract information from scientific literature for generating a knowledge graph reusing existing domain ontologies. We demonstrate the effectiveness of our method by generating knowledge graphs from a set of abstracts of scientific papers on Alzheimer’s Disease.},
author_keywords={Information extraction; Knowledge graphs; Ontologies; RDF triples},
publisher={CEUR-WS},
document_type={Conference Paper},
source={Scopus}
}
|
Destro, J.M.;
dos Reis, J.C.;
da Silva Torres, R.;
Ricarte, I.
Ontology changes-driven semantic refinement of cross-language biomedical ontology alignments (conference)
International Workshop on Semantic Web Solutions for Large-Scale Biomedical Data Analytics - 18th International Semantic Web Conference (ISWC 2019),
CEUR-WS,
2019.
(
Abstract |
Links |
BibTeX |
Tags:
mapping refinement,
ontology evolution,
cross-language alignment
)
@CONFERENCE{Destro2019o,
author={Destro, J.M. and dos Reis, J.C. and da Silva Torres, R. and Ricarte, I.},
title={Ontology changes-driven semantic refinement of cross-language biomedical ontology alignments},
booktitle={CEUR Workshop Proceedings},
year={2019},
volume={2477},
pages={31-15},
url={https://www.scopus.com/inward/record.uri?eid=2-s2.0-85074606707&partnerID=40&md5=f22a2f846062015df7a6df66ee247be6},
abstract={Biomedical computational systems benefit from the use of ontologies. However, interconnectivity between these systems is a challenge, especially when the ontologies supporting each system are described in different natural languages. Ontology alignment plays a key role in data exchange. Existing ontology matching approaches usually provide only the equivalence type of relation in the generated mappings. In this article, we propose a refinement technique to enable the update of the semantic type of the mapping beyond equivalence. Our approach relies on information from the ontology evolution. Our evaluation considered LOINC releases in different languages. The results demonstrate the usefulness of ontology evolution changes to support the process of mapping refinement.},
author_keywords={Cross-language alignment; Mapping refinement; Ontology evolution},
publisher={CEUR-WS},
document_type={Conference Paper},
source={Scopus}
}
|
Destro, Juliana Medeiros;
Vargas, Javier A;
dos Reis, Julio Cesar;
Torres, Ricardo Da Silva
EVOCROS: Results for OAEI 2019 (conference)
The Fourteenth International Workshop on Ontology Matching - 18th International Semantic Web Conference ISWC-2019,
CEUR-WS,
2019.
(
Abstract |
Links |
BibTeX |
Tags:
cross-lingual matching,
semantic matching,
background knowledge,
ranking aggregation
)
@inproceedings{destro2019evocros,
title={EVOCROS: Results for OAEI 2019},
author={Destro, Juliana Medeiros and Vargas, Javier A and dos Reis, Julio Cesar and Torres, Ricardo Da Silva},
year={2019},
organization={CEUR Workshop Proceedings}
}
This paper describes the updates in EVOCROS, a cross-lingual ontology alignment system suited to create mappings between ontologies described in different natural languages. Our tool combines syntactic and semantic similarity measures with information retrieval techniques. The semantic similarity is computed via NASARI vectors used together with BabelNet, a domain-neutral semantic network. In particular, we investigate the use of rank aggregation techniques in the cross-lingual ontology alignment task. The tool employs automatic translation to a pivot language to compute the similarity. EVOCROS was tested and obtained high-quality alignments on the Multifarm dataset. We discuss the experimented configurations and the achieved results in OAEI 2019. This is our second participation in OAEI.
|
Santos, Andressa C. dos;
Muriana, Luã M.;
Pimenta, Josiane R. O. G.;
Silva, José V. da;
Moreira, Eliana A.;
Reis, Julio C. dos
Investigating Aspects of Affectibility for Universal Access in Socioenactive System Scenarios (conference)
Proceedings of the 18th Brazilian Symposium on Human Factors in Computing Systems,
Association for Computing Machinery,
2019.
(
Abstract |
Links |
BibTeX |
Tags:
Affectibility,
Socioenactive,
Universal Access,
Emotion,
PAff
)
@inproceedings{AdosSantos2019,
author = {Santos, Andressa C. dos and Muriana, Lu\~{a} M. and Pimenta, Josiane R. O. G. and Silva, Jos\'{e} V. da and Moreira, Eliana A. and Reis, Julio C. dos},
title = {Investigating Aspects of Affectibility for Universal Access in Socioenactive System Scenarios},
year = {2019},
isbn = {9781450369718},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3357155.3358475},
doi = {10.1145/3357155.3358475},
booktitle = {Proceedings of the 18th Brazilian Symposium on Human Factors in Computing Systems},
articleno = {33},
numpages = {11},
keywords = {universal access; socioenactive; affectibility; emotion; PAff},
location = {Vit\'{o}ria, Esp\'{\i}rito Santo, Brazil},
}
The design process focused on universal access must be guided by a set of relevant recommendations to improve interaction design and evaluation. The interactions in socioenactive systems intensify the emphasis on social, corporal, and affective aspects. This article develops an affective study in the context of socioenactive scenarios. Our objective is to analyse the Design Principles of Affectibility (PAff) towards universal access in socioenactive systems. The analysis was conducted in a workshop held at a hospital, where the participants included children undergoing rehabilitation for face and skull disorders. Based on the analysis applying PAff, we generated a set of recommendations that might be useful to designers for promoting universal access in socioenactive systems.
|
L. Virginio;
J. C. dos Reis
Finding Relations Between Requirements for Healthcare Information Systems Use in Hospitals: A Study on EMRAM and JCI (conference)
2019 12th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI),
IEEE,
2019.
(
Abstract |
Links |
BibTeX |
Tags:
Electronic Medical Record Adoption Model,
Healthcare Information System,
Joint Commission International
)
@INPROCEEDINGS{virginio2019f,
author={L. {Virginio} and J. C. {dos Reis}},
booktitle={2019 12th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)},
title={Finding Relations Between Requirements for Healthcare Information Systems Use in Hospitals: A Study on EMRAM and JCI},
year={2019},
volume={},
number={},
pages={1-6},
abstract={EMRAM is a maturity model whose goal is to measure the adoption and utilization of HIS functions in hospitals. However, maturity models regarding HIS in healthcare settings are not comprehensive and lack detail. Therefore, it is important for EMRAM to learn from other sources of HIS evaluation in healthcare organizations, such as JCI. In addition, it is important to understand how to adapt processes and implement technologies that can ensure compliance with the requirements established by both bodies. In this paper, we carry out an evaluation to identify relations between JCI and EMRAM requirements. We extracted EMRAM and JCI requirements and identified relations between them, with further validation by a specialist. We identified 127 relations between JCI requirements and EMRAM and/or HIS. Six JCI requirements specifically related to IT are not currently required by EMRAM and could be used to promote the evolution of the maturity model. We also identified the JCI requirements that can be supported by HIS, which can be used by healthcare organizations to facilitate the management of JCI and EMRAM conformance.},
keywords={health care;hospitals;medical information systems;healthcare information systems;hospitals;maturity model;healthcare organizations;EMRAM requirements;JCI requirements;HIS evaluation;Healthcare Information System;Electronic Medical Record Adoption Model;Joint Commission International},
doi={10.1109/CISP-BMEI48845.2019.8965782},
ISSN={},
month={Oct}
}
|
Muriana, Luã Marcelo;
Tosi, Mauro Dalle Lucca;
dos Reis, Julio Cesar
Aprendendo via o Papel de Designer e de Stakeholder: Uma Estratégia Pedagógica para Ensino de IHC (symposium)
Anais Estendidos do XVIII Simpósio Brasileiro sobre Fatores Humanos em Sistemas Computacionais,
SBC,
symposium,
2019.
(
Abstract |
Links |
BibTeX |
Tags:
Ensino Baseado em Projeto,
Papel de Designer,
Ensino de IHC,
Design Centrado no Usuário
)
@inproceedings{muriana2019a,
author = {Luã Marcelo Muriana and Mauro Dalle Lucca Tosi and Julio Cesar dos Reis},
title = {Aprendendo via o Papel de Designer e de Stakeholder: Uma Estratégia Pedagógica para Ensino de IHC},
booktitle = {Anais Estendidos do XVIII Simpósio Brasileiro sobre Fatores Humanos em Sistemas Computacionais},
location = {Vitória},
year = {2019},
keywords = {Ensino Baseado em Projeto, Papel de Designer, Ensino de IHC, Design Centrado no Usuário},
issn = {2177-9384},
pages = {88--93},
publisher = {SBC},
address = {Porto Alegre, RS, Brasil},
doi = {10.5753/ihc.2019.8406},
url = {https://sol.sbc.org.br/index.php/ihc_estendido/article/view/8406}
}
HCI teaching demands collaborative group practices in which students experience different roles in the design process of interactive software. In this article, we evaluate the use of a pedagogical strategy for HCI teaching in which students play both the role of designers and of stakeholders in a project-based learning approach. Students are divided into groups that initially choose the themes of the projects on which they will work as designers. However, to simulate an experience closer to the real world, where choosing project themes is not possible, the groups are paired and have their themes swapped. Thus, each group acts as designer of a project with an unfamiliar theme (not defined by the group itself) and plays the role of “client” for the theme it originally chose. To evaluate the students' perception of learning with respect to the proposed teaching strategy, we applied a questionnaire to Computer Science and Computer Engineering classes. Based on 123 analyzed responses, 96% of the students stated that the project positively helped them learn HCI.
|
Borges, Marcos Vinícius Macêdo;
dos Reis, Julio Cesar
Semantic-Enhanced Recommendation of Video Lectures (conference)
2019 IEEE 19th International Conference on Advanced Learning Technologies (ICALT),
IEEE,
2019.
(
Abstract |
Links |
BibTeX |
Tags:
Learning support,
Ontology,
Recommendation System,
Semantic Annotation
)
@INPROCEEDINGS{borges2019s,
author={M. V. {Macêdo Borges} and J. C. {dos Reis}},
booktitle={2019 IEEE 19th International Conference on Advanced Learning Technologies (ICALT)},
title={Semantic-Enhanced Recommendation of Video Lectures},
year={2019},
volume={2161-377X},
number={},
pages={42-46},
abstract={Learning support systems explore several audio-visual resources to consider individual needs and learning styles, aiming to stimulate learning experiences. However, the large amount of educational content in different formats and the possibility of making it available in a fragmented way make it difficult to access these resources and understand the concepts under study. Although the literature has proposed approaches to explore explicit semantic representation through artifacts such as ontologies in learning support systems, this research line still requires further investigation efforts. In this paper, we propose a method for recommending educational content by exploring the use of semantic annotations over textual transcriptions from video lessons. Our investigation addresses the difficulties in extracting entities from natural language texts such as video subtitles. We report on major challenges in representing video transcriptions as semantic annotations for automatic recommendation of educational content.},
keywords={computer aided instruction;ontologies (artificial intelligence);text analysis;semantic-enhanced recommendation;video lectures;support systems;audio-visual resources;individual needs;learning styles;educational content;explicit semantic representation;research line;investigation efforts;semantic annotations;video lessons;video transcriptions;automatic recommendation;textual transcriptions;learning experiences;natural language texts;Ontologies;Semantics;Annotations;Task analysis;Computer science;Indexing;Ontology, Learning support, Recommendation System, Semantic Annotation},
doi={10.1109/ICALT.2019.00013},
ISSN={2161-377X},
month={July}
}
Learning support systems explore several audio-visual resources to consider individual needs and learning styles, aiming to stimulate learning experiences. However, the large amount of educational content in different formats and the possibility of making it available in a fragmented way make it difficult to access these resources and understand the concepts under study. Although the literature has proposed approaches to explore explicit semantic representation through artifacts such as ontologies in learning support systems, this research line still requires further investigation efforts. In this paper, we propose a method for recommending educational content by exploring the use of semantic annotations over textual transcriptions from video lessons. Our investigation addresses the difficulties in extracting entities from natural language texts such as video subtitles. We report on major challenges in representing video transcriptions as semantic annotations for automatic recommendation of educational content.
|
Victorelli, Eliane Zambon;
dos Reis, Julio Cesar;
Santos, Antonio Alberto Souza;
Schiozer, Denis José
Participatory Evaluation of Human-Data Interaction Design Guidelines (conference)
Human-Computer Interaction - INTERACT 2019,
Springer,
2019.
(
Abstract |
Links |
BibTeX |
Tags:
Human-data interaction,
Design guidelines,
Design evaluation,
Participatory design,
Visual analytics,
Oil reservoirs
)
@InProceedings{victorelli2019p,
author="Victorelli, Eliane Zambon
and Reis, Julio Cesar dos
and Santos, Antonio Alberto Souza
and Schiozer, Denis Jos{\'e}",
editor="Lamas, David
and Loizides, Fernando
and Nacke, Lennart
and Petrie, Helen
and Winckler, Marco
and Zaphiris, Panayiotis",
title="Participatory Evaluation of Human-Data Interaction Design Guidelines",
booktitle="Human-Computer Interaction -- INTERACT 2019",
year="2019",
publisher="Springer International Publishing",
address="Cham",
pages="475--494",
abstract="The design of visual analytics tools for facilitating human-data interaction (HDI) plays a key role in helping people identify useful knowledge from large masses of data. Designing data visualization based on guidelines is relevant. However, it is necessary to further promote the engagement of people in evaluation activities during the design process. Stakeholders need to comprehend the guidelines to contribute to the evaluation results and design decisions. In this paper, we propose participatory evaluation practices based on HDI design guidelines. The practices aim to create the conditions for participants of any profile to collaborate in the evaluation of the design guidelines. The practices were applied to a design problem involving interactions with coordinated visualizations. The context of application was a visual analytics tool supporting decisions related to the production strategy in oil reservoirs, with the participation of key stakeholders. The results indicate that participants were able to understand the design guidelines and took advantage of them in design decisions.",
isbn="978-3-030-29381-9"
}
The design of visual analytics tools for facilitating human-data interaction (HDI) plays a key role in helping people identify useful knowledge from large masses of data. Designing data visualization based on guidelines is relevant. However, it is necessary to further promote the engagement of people in evaluation activities during the design process. Stakeholders need to comprehend the guidelines to contribute to the evaluation results and design decisions. In this paper, we propose participatory evaluation practices based on HDI design guidelines. The practices aim to create the conditions for participants of any profile to collaborate in the evaluation of the design guidelines. The practices were applied to a design problem involving interactions with coordinated visualizations. The context of application was a visual analytics tool supporting decisions related to the production strategy in oil reservoirs, with the participation of key stakeholders. The results indicate that participants were able to understand the design guidelines and took advantage of them in design decisions.
|
Lombello, Luma Oliveira;
dos Reis, Julio Cesar;
Bonacin, Rodrigo
Soft Ontologies as Fuzzy RDF Statements (conference)
2019 IEEE 28th International Conference on Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE),
IEEE,
2019.
(
Abstract |
Links |
BibTeX |
Tags:
Fuzzy,
Linked Data,
Ontology,
RDF,
Soft Ontologies,
Triples,
Triplification
)
@INPROCEEDINGS{lombello2019s,
author={L. O. {Lombello} and J. C. {dos Reis} and R. {Bonacin}},
booktitle={2019 IEEE 28th International Conference on Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE)},
title={Soft Ontologies as Fuzzy RDF Statements},
year={2019},
volume={},
number={},
pages={289-294},
abstract={Soft ontologies enable knowledge representation in a flexible way because they are more susceptible to changes in their interrelationships over time. The transformation of a non-standardized model of information into a computer-interpretable representation is not an easy task. Our investigation expands the concept of soft ontologies to fuzzy data represented as fuzzy RDF triples. In this paper, we elaborate a process that transforms a soft ontology, implemented in the form of a matrix of probabilities, into a fuzzy RDF dataset. The study presents how the matrix of probabilities is used in the representation and how the data elements are triplified by exploring fuzzy characteristics. We apply the proposal in an experimental scenario by constructing a soft ontology that expresses a repertoire of actions of an mBot robot. Then, the facts are triplified into fuzzy RDF statements. Our results present original aspects related to the transformation of soft ontologies into fuzzy RDF triples.},
keywords={data handling;fuzzy set theory;knowledge representation;matrix algebra;ontologies (artificial intelligence);probability;soft ontology;fuzzy RDF statements;fuzzy data;fuzzy RDF triples;fuzzy RDF dataset;matrix of probabilities;data elements;fuzzy characteristics;mBot robot;Ontologies;Resource description framework;Automobiles;Proposals;Vocabulary;Data models;Computational modeling;Ontology, Soft Ontologies, Fuzzy, RDF, Triples, Triplification, Linked Data},
doi={10.1109/WETICE.2019.00067},
ISSN={2641-8169},
month={June}
}
Soft ontologies enable knowledge representation in a flexible way because they are more susceptible to changes in their interrelationships over time. The transformation of a non-standardized model of information into a computer-interpretable representation is not an easy task. Our investigation expands the concept of soft ontologies to fuzzy data represented as fuzzy RDF triples. In this paper, we elaborate a process that transforms a soft ontology, implemented in the form of a matrix of probabilities, into a fuzzy RDF dataset. The study presents how the matrix of probabilities is used in the representation and how the data elements are triplified by exploring fuzzy characteristics. We apply the proposal in an experimental scenario by constructing a soft ontology that expresses a repertoire of actions of an mBot robot. Then, the facts are triplified into fuzzy RDF statements. Our results present original aspects related to the transformation of soft ontologies into fuzzy RDF triples.
|
Borges, Marcos Vinicius Macedo;
dos Reis, Julio Cesar;
Gribeler, Guilherme Pereira
Empirical Analysis of Semantic Metadata Extraction from Video Lecture Subtitles (conference)
2019 IEEE 28th International Conference on Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE),
IEEE,
2019.
(
Abstract |
Links |
BibTeX |
Tags:
Metadata extraction,
Ontology,
Semantic Annotation,
Video lectures
)
@INPROCEEDINGS{borges2019e,
author={M. V. {Macedo Borges} and J. C. {dos Reis} and G. {Pereira Gribeler}},
booktitle={2019 IEEE 28th International Conference on Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE)},
title={Empirical Analysis of Semantic Metadata Extraction from Video Lecture Subtitles},
year={2019},
volume={},
number={},
pages={301-306},
abstract={Video lectures improve learning experiences by considering individuals' needs and learning styles. However, the large amount of educational content and its availability in a fragmented way make it difficult to access these resources and understand the concepts under study. Extracting relevant information from video lectures can be useful for recommendation purposes and for helping interpret a concept at an exact moment of a lecture. The extraction of semantic metadata from a video's natural language subtitles involves challenges in dealing with informal aspects of language and in detecting semantic classes from the text. In this paper, we conduct an empirical analysis of semantic annotation approaches supported by ontologies for the extraction of relevant metadata from textual transcriptions of video lectures in Computer Science. The obtained results indicate that existing tools can be useful for the studied task and that the video lecture semantic metadata extraction process is highly influenced by the underlying ontologies.},
keywords={computer aided instruction;computer science education;information retrieval;interactive video;meta data;multimedia computing;natural language processing;text analysis;video lecture subtitles;video lectures;video natural language subtitle;semantic classes;semantic annotation approaches;semantic metadata extraction process;learning experiences;learning styles;educational content;ontologies;textual transcriptions;computer science;Ontologies;Semantics;Tools;Metadata;Computer science;Task analysis;Natural languages;Ontology, Semantic Annotation, Metadata extraction, Video lectures},
doi={10.1109/WETICE.2019.00069},
ISSN={2641-8169},
month={June}
}
Video lectures improve learning experiences by considering individuals' needs and learning styles. However, the large amount of educational content and its availability in a fragmented way make it difficult to access these resources and understand the concepts under study. Extracting relevant information from video lectures can be useful for recommendation purposes and for helping interpret a concept at an exact moment of a lecture. The extraction of semantic metadata from a video's natural language subtitles involves challenges in dealing with informal aspects of language and in detecting semantic classes from the text. In this paper, we conduct an empirical analysis of semantic annotation approaches supported by ontologies for the extraction of relevant metadata from textual transcriptions of video lectures in Computer Science. The obtained results indicate that existing tools can be useful for the studied task and that the video lecture semantic metadata extraction process is highly influenced by the underlying ontologies.
|
Saraiva, Márcio de Carvalho
Relationships among educational materials through the extraction of implicit topics (phdthesis)
phdthesis,
2019.
(
Abstract |
Links |
BibTeX |
Tags:
Teaching materials,
Education - Technological innovations,
Data mining,
Classification
)
@phdthesis{Saraiva2019,
abstract= {Digital educational documents are growing in size and variety, catering to an increasingly heterogeneous public. As a consequence, students face difficulties finding their way through such material. Several scientists have created online repositories to store and facilitate access to these documents. Unfortunately, in most such repositories documents are stored in a haphazard way. This hampers distinguishing among the contents of these materials, as well as their retrieval. As a consequence, students interested in accessing relevant material revert to (traditional) Web search engines, or to browsing through one specific repository. In most cases, the results of invoking those search engines are presented as a set (or disjunction) of potentially interesting documents, which may not be adapted to the learning purpose. One of the initiatives that have emerged to solve this problem involves the use of automatic classification algorithms, e.g., Topic Modeling and Topic Labeling. However, there remains the difficulty of analyzing implicit relationships among topics of materials and lecturers from different places, even within a single repository. Moreover, these solutions have not been applied to sets of documents with different formats, and do not take advantage of additional information (e.g., metadata) to extract topics. This work presents CIMAL, a framework for flexible analysis of educational material repositories; CIMAL combines semantic classification, taxonomies and graph structures to extract topics and their multiple relationships. We validated our proposal through a prototype that uses real materials from Coursera (Johns Hopkins University and University of Michigan) and a Higher Education Institute from Brazil. As far as we know, this is the first time that both slide and video features guide text analysis, topic classification techniques and relationship discovery among documents.
The elicitation of topics covered in various educational documents and of their potential relationships can support teachers and students in undertaking academic activities that are more dynamic than conventional ones – e.g., in which new relationships are found between different subjects from different sources. This can also make it easier to search for the most appropriate items in educational repositories to learn new concepts, enhancing the development of new courses. From the computational point of view, this research contributes to the improvement of techniques for handling unstructured documents and documents of different formats.},
author = {Márcio de Carvalho Saraiva},
date = {2019-08-14},
keyword = {Teaching materials;Education - Technological innovations;Data mining;Classification},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2019/TeseMarcio.pdf},
school = {University of Campinas - Institute of Computing},
title = {Relationships among educational materials through the extraction of implicit topics},
year = {2019}
}
Digital educational documents are growing in size and variety, catering to an increasingly heterogeneous public. As a consequence, students face difficulties finding their way through such material. Several scientists have created online repositories to store and facilitate access to these documents. Unfortunately, in most such repositories documents are stored in a haphazard way. This hampers distinguishing among the contents of these materials, as well as their retrieval. As a consequence, students interested in accessing relevant material revert to (traditional) Web search engines, or to browsing through one specific repository. In most cases, the results of invoking those search engines are presented as a set (or disjunction) of potentially interesting documents, which may not be adapted to the learning purpose. One of the initiatives that have emerged to solve this problem involves the use of automatic classification algorithms, e.g., Topic Modeling and Topic Labeling. However, there remains the difficulty of analyzing implicit relationships among topics of materials and lecturers from different places, even within a single repository. Moreover, these solutions have not been applied to sets of documents with different formats, and do not take advantage of additional information (e.g., metadata) to extract topics. This work presents CIMAL, a framework for flexible analysis of educational material repositories; CIMAL combines semantic classification, taxonomies and graph structures to extract topics and their multiple relationships. We validated our proposal through a prototype that uses real materials from Coursera (Johns Hopkins University and University of Michigan) and a Higher Education Institute from Brazil. As far as we know, this is the first time that both slide and video features guide text analysis, topic classification techniques and relationship discovery among documents.
The elicitation of topics covered in various educational documents and of their potential relationships can support teachers and students in undertaking academic activities that are more dynamic than conventional ones – e.g., in which new relationships are found between different subjects from different sources. This can also make it easier to search for the most appropriate items in educational repositories to learn new concepts, enhancing the development of new courses. From the computational point of view, this research contributes to the improvement of techniques for handling unstructured documents and documents of different formats.
|
dos Santos, Andressa Cristina;
Maike, Vanessa Regina Margareth Lima;
Mendez Mendoza, Yusseli Lizeth;
da Silva, José Valderlei;
Bonacin, Rodrigo;
Dos Reis, Julio Cesar;
Baranauskas, Maria Cecília Calani
Inquiring Evaluation Aspects of Universal Design and Natural Interaction in Socioenactive Scenarios (conference)
Universal Access in Human-Computer Interaction. Theory, Methods and Tools,
Springer,
2019.
(
Abstract |
Links |
BibTeX |
Tags:
Accessibility,
Interaction evaluation,
Ubiquitous computing,
Pervasive,
Natural User Interfaces,
Universal Design,
Universal Access
)
@InProceedings{dosSantos2019i,
author="dos Santos, Andressa Cristina
and Maike, Vanessa Regina Margareth Lima
and M{\'e}ndez Mendoza, Yusseli Lizeth
and da Silva, Jos{\'e} Valderlei
and Bonacin, Rodrigo
and Dos Reis, Julio Cesar
and Baranauskas, Maria Cec{\'i}lia Calani",
editor="Antona, Margherita
and Stephanidis, Constantine",
title="Inquiring Evaluation Aspects of Universal Design and Natural Interaction in Socioenactive Scenarios",
booktitle="Universal Access in Human-Computer Interaction. Theory, Methods and Tools",
year="2019",
publisher="Springer International Publishing",
address="Cham",
pages="39--56",
abstract="New technologies and ubiquitous systems present new forms and modalities of interaction. Evaluating such systems, particularly in the novel socioenactive scenario, poses a difficult issue, as existing instruments do not capture all aspects intrinsic to such a scenario. One of the key aspects is the wide range of characteristics and needs of both the users and the technology involved. In this paper, we are concerned with aspects of both Universal Design (UD) and Natural User Interfaces (NUIs). We present a case study in which we applied, within a socioenactive scenario, evaluation instruments relying on principles and heuristics from these areas. The scenario involved six children from a hospital that treats craniofacial deformities, playing in a rich interactive environment with displays and plush animals that respond to hugs. Our results, based on the analysis of the evaluation conducted in the case study, suggest informed recommendations on how to use the evaluation instruments in the context of socioenactive systems and on their limitations.",
isbn="978-3-030-23560-4"
}
New technologies and ubiquitous systems present new forms and modalities of interaction. Evaluating such systems, particularly in the novel socioenactive scenario, poses a difficult issue, as existing instruments do not capture all aspects intrinsic to such a scenario. One of the key aspects is the wide range of characteristics and needs of both the users and the technology involved. In this paper, we are concerned with aspects of both Universal Design (UD) and Natural User Interfaces (NUIs). We present a case study in which we applied, within a socioenactive scenario, evaluation instruments relying on principles and heuristics from these areas. The scenario involved six children from a hospital that treats craniofacial deformities, playing in a rich interactive environment with displays and plush animals that respond to hugs. Our results, based on the analysis of the evaluation conducted in the case study, suggest informed recommendations on how to use the evaluation instruments in the context of socioenactive systems and on their limitations.
|
Ramos, Pedro Alan T.;
dos Reis, Julio Cesar;
de Souza dos Santos, Antonio Alberto;
Schiozer, Denis José
Participatory Design of System Messages in Petroleum Fields Management Software (conference)
Human-Computer Interaction. Design Practice in Contemporary Societies,
Springer,
2019.
(
Abstract |
Links |
BibTeX |
Tags:
Participatory design,
Reservoir simulations,
Braindrawing,
Brainwriting,
Scientific software,
Application System Messages
)
@InProceedings{ramos2019p,
author="Ramos, Pedro Alan T.
and dos Reis, Julio Cesar
and de Souza dos Santos, Antonio Alberto
and Schiozer, Denis Jos{\'e}",
editor="Kurosu, Masaaki",
title="Participatory Design of System Messages in Petroleum Fields Management Software",
booktitle="Human-Computer Interaction. Design Practice in Contemporary Societies",
year="2019",
publisher="Springer International Publishing",
address="Cham",
pages="459--475",
abstract="Users face difficulties in understanding the progress of simulation tasks in oil reservoirs. It is necessary to make it clear to users when a task encounters errors, because these tasks are time consuming. In this paper, we propose the use of participatory design to conceive Application System Messages (ASMs) in software tools implemented to support studies related to Numerical Simulation and Management of Petroleum Reservoirs. We explored braindrawing and brainwriting techniques to acquire early concepts for a redesign of the ASMs' presentation and content. The obtained results indicate that the use of participatory practices is useful for improving the redesign of ASMs in our study context.",
isbn="978-3-030-22636-7"
}
Users face difficulties in understanding the progress of simulation tasks in oil reservoirs. It is necessary to make it clear to users when a task encounters errors, because these tasks are time consuming. In this paper, we propose the use of participatory design to conceive Application System Messages (ASMs) in software tools implemented to support studies related to Numerical Simulation and Management of Petroleum Reservoirs. We explored braindrawing and brainwriting techniques to acquire early concepts for a redesign of the ASMs' presentation and content. The obtained results indicate that the use of participatory practices is useful for improving the redesign of ASMs in our study context.
|
Caceffo, Ricardo;
Alves Moreira, Eliana;
Bonacin, Rodrigo;
dos Reis, Julio Cesar;
Luque Carbajal, Marleny;
D'Abreu, João Vilhete V.;
Brennand, Camilla V. L. T.;
Lombello, Luma;
Valente, José Armando;
Baranauskas, Maria Cecília Calani
Collaborative Meaning Construction in Socioenactive Systems: Study with the mBot (conference)
Learning and Collaboration Technologies. Designing Learning Experiences,
Springer,
2019.
(
Abstract |
Links |
BibTeX |
Tags:
Enactive,
Educational,
Robots,
Interactive design,
Evaluation,
Ontologies,
Emotions,
HCI
)
@InProceedings{caceffo2019c,
author="Caceffo, Ricardo
and Alves Moreira, Eliana
and Bonacin, Rodrigo
and dos Reis, Julio Cesar
and Luque Carbajal, Marleny
and D'Abreu, Jo{\~a}o Vilhete V.
and Brennand, Camilla V. L. T.
and Lombello, Luma
and Valente, Jos{\'e} Armando
and Baranauskas, Maria Cec{\'i}lia Calani",
editor="Zaphiris, Panayiotis
and Ioannou, Andri",
title="Collaborative Meaning Construction in Socioenactive Systems: Study with the mBot",
booktitle="Learning and Collaboration Technologies. Designing Learning Experiences",
year="2019",
publisher="Springer International Publishing",
address="Cham",
pages="237--255",
abstract="The design of interactive systems concerned with both the impact of the technology on the human agent and the effect of the human experience on the technology is not a trivial task. Our investigation moves towards a vision of socioenactive systems by supporting and identifying how a group of people can dynamically and seamlessly interact with technology. In this paper, we elaborate a set of guidelines for designing socioenactive systems. We apply them in the construction of a technological framework situated in an educational environment for children around the age of 5 (N = 25). The scenario was supported by educational robots programmed to perform a set of actions mimicking human emotional expressions. The system was designed to shape the robots' behavior according to the feedback of children's responses in iterative sessions. This entails a complete cycle, in which the robot impacts the children and is affected by their experiences. We found that children create hypotheses to make sense of the robot's behavior. Our results present original aspects related to a socioenactive system.",
isbn="978-3-030-21814-0"
}
The design of interactive systems concerned with both the impact of the technology on the human agent and the effect of the human experience on the technology is not a trivial task. Our investigation moves towards a vision of socioenactive systems by supporting and identifying how a group of people can dynamically and seamlessly interact with technology. In this paper, we elaborate a set of guidelines for designing socioenactive systems. We apply them in the construction of a technological framework situated in an educational environment for children around the age of 5 (N = 25). The scenario was supported by educational robots programmed to perform a set of actions mimicking human emotional expressions. The system was designed to shape the robots' behavior according to the feedback of children's responses in iterative sessions. This entails a complete cycle, in which the robot impacts the children and is affected by their experiences. We found that children create hypotheses to make sense of the robot's behavior. Our results present original aspects related to a socioenactive system.
|
Victorelli, E.Z.;
Dos Reis, J.C.;
Souza Santos, A.A.;
Schiozer, D.J.
Design process for human-data interaction: Combining guidelines with semio-participatory techniques (conference)
ICEIS 2019 - Proceedings of the 21st International Conference on Enterprise Information Systems,
ICEIS,
2019.
(
Abstract |
Links |
BibTeX |
Tags:
Human-Data Interaction,
Design Approaches,
Visual Analytics
)
@CONFERENCE{Victorelli2019d,
author={Victorelli, E.Z. and Dos Reis, J.C. and Souza Santos, A.A. and Schiozer, D.J.},
title={Design process for human-data interaction: Combining guidelines with semio-participatory techniques},
booktitle={ICEIS 2019 - Proceedings of the 21st International Conference on Enterprise Information Systems},
year={2019},
volume={2},
pages={410-421},
url={https://www.scopus.com/inward/record.uri?eid=2-s2.0-85067430856&partnerID=40&md5=f5ef73373a1a2196c5c421dece0d9a1b},
author_keywords={Design Approaches; Human-Data Interaction; Visual Analytics},
document_type={Conference Paper},
source={Scopus}
}
The complexity of the analytical reasoning required to extract and identify useful knowledge from large masses of data demands that the design of visual analytics tools address the challenges of facilitating human-data interaction (HDI). Designing data visualisation based on guidelines is fast and low-cost, but does not favour the engagement of people in the process. In this paper, we propose a design process that integrates guideline-based design with participatory design practices. We investigate, and when necessary adapt, existing practices for each step of our design process. The process was evaluated on a design problem involving a visual analytics tool supporting decisions related to the production strategy in oil reservoirs, with the participation of key stakeholders. The generated prototype was tested with adapted participatory evaluation practices. The obtained results indicate participants’ satisfaction with the design practices used and show that users’ needs were fulfilled. The design process and the associated practices may serve as a basis for improving HDI in other contexts.
|
Venero, Sheila Katherine;
dos Reis, Julio Cesar;
Montecchi, Leonardo;
Rubira, Cecília Mary Fischer
Towards a Metamodel for Supporting Decisions in Knowledge-Intensive Processes (conference)
Proceedings of the 34th ACM/SIGAPP Symposium on Applied Computing,
ACM,
2019.
(
Abstract |
Links |
BibTeX |
Tags:
knowledge-intensive process,
business process management systems,
process-aware information systems,
business process modeling,
case management,
knowledge management
)
@inproceedings{venero2019t,
author = {Venero, Sheila Katherine and Reis, Julio Cesar dos and Montecchi, Leonardo and Rubira, Cec\'{\i}lia Mary Fischer},
title = {Towards a Metamodel for Supporting Decisions in Knowledge-Intensive Processes},
year = {2019},
isbn = {9781450359337},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3297280.3297290},
doi = {10.1145/3297280.3297290},
booktitle = {Proceedings of the 34th ACM/SIGAPP Symposium on Applied Computing},
pages = {75--84},
numpages = {10},
keywords = {knowledge-intensive process, business process management systems, process-aware information systems, business process modeling, case management, knowledge management},
location = {Limassol, Cyprus},
series = {SAC ’19}
}
Knowledge-intensive processes (KiPs) cannot be fully specified at design time because not all information about the process is available prior to its execution. At runtime, new information emerges, reflecting environment changes or unexpected outcomes. The structure of this kind of process varies from case to case and is defined step by step based on knowledge workers' decisions made after analyzing the current situation. These decisions rely on the knowledge workers' experience and on the available information. Current process management approaches still need to adequately address the complex characteristics of knowledge-intensive processes, such as their unpredictability, emergence, non-repeatability, and dynamism. This paper proposes a metamodel for representing KiPs that aims to help knowledge workers during the decision-making process. Domain and organizational knowledge are modeled by objectives and tactics. The metamodel supports the definition of objectives, metrics, tactics, goals and strategies at runtime according to a specific situation. It also includes concepts related to context and environment elements, business artifacts, roles and rules. The feasibility of our model was evaluated via a proof of concept in the medical domain.
|
Wang, Y.;
Dos Reis, J.C.;
Borggren, K.M.;
Vaz Salles, M.A.;
Medeiros, C.B.;
Zhou, Y.
Modeling and building IoT data platforms with actor-oriented databases (conference)
Advances in Database Technology,
EDBT,
2019.
(
Abstract |
Links |
BibTeX |
Tags:
)
@CONFERENCE{Wang2019m,
author={Wang, Y. and Dos Reis, J.C. and Borggren, K.M. and Vaz Salles, M.A. and Medeiros, C.B. and Zhou, Y.},
title={Modeling and building IoT data platforms with actor-oriented databases},
booktitle={Advances in Database Technology - EDBT},
year={2019},
volume={2019-March},
pages={512-523},
doi={10.5441/002/edbt.2019.47},
url={https://www.scopus.com/inward/record.uri?eid=2-s2.0-85064947935&doi=10.5441%2f002%2fedbt.2019.47&partnerID=40&md5=3b80806b1652019da853394ca911f61b},
document_type={Conference Paper},
source={Scopus}
}
Vast amounts of data are being generated daily with the adoption of Internet-of-Things (IoT) solutions in an ever-increasing number of application domains. There are problems associated with all stages of the lifecycle of these data (e.g., capture, curation and preservation). Moreover, the volume, variety, dynamicity and ubiquity of IoT data present additional challenges to their usability, prompting the need for constructing scalable data-intensive IoT data management and processing platforms. This paper presents a novel approach to model and build IoT data platforms based on the characteristics of an Actor-Oriented Database (AODB). We take advantage of two complementary case studies – in structural health monitoring and beef cattle tracking and tracing – to describe novel software requirements introduced by IoT data processing. Our investigation illustrates the challenges and benefits provided by AODB to meet these requirements in terms of modeling and IoT-based systems implementation. Obtained results reveal the advantages of using AODB in IoT scenarios and lead to principles on how to effectively use an actor model to design and implement IoT data platforms.
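The actor idea underlying this approach can be sketched in plain Python: each virtual sensor is an actor that owns its state exclusively and processes messages from a private mailbox one at a time. This is a minimal illustration only; the class, message names, and sensor scenario are assumptions, not the paper's actual AODB implementation.

```python
# Minimal actor sketch: one mailbox per actor serializes all state changes,
# so no lock is ever needed on the actor's own state.
import queue
import threading

class SensorActor:
    """Toy actor modelling a single IoT sensor."""

    def __init__(self):
        self._mailbox = queue.Queue()
        self._readings = []  # state touched only by the actor's own thread
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        while True:
            msg = self._mailbox.get()
            if msg is None:  # poison pill: stop processing
                break
            self._readings.append(msg)

    def send(self, reading):
        # Asynchronous message send; callers never touch actor state directly.
        self._mailbox.put(reading)

    def stop(self):
        self._mailbox.put(None)
        self._thread.join()

actor = SensorActor()
for value in (21.5, 21.7, 21.9):
    actor.send(value)
actor.stop()  # after this, all queued readings have been processed in order
```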
|
Moreira, Eliana;
DOS REIS, Julio;
Baranauskas, Maria Cecília
Tangible Artifacts and the Evaluation of Affective States by children (journal)
Brazilian Journal of Computers in Education,
RBIE,
journal,
2019.
(
Abstract |
Links |
BibTeX |
Tags:
Tangible interfaces,
Evaluation,
Affective states,
Ludic
)
@article{moreira2019t,
author = {Eliana Moreira and Julio Dos Reis and Maria Cecília Baranauskas},
title = {Artefatos Tangíveis e a Avaliação de Estados Afetivos por Crianças},
journal = {Revista Brasileira de Informática na Educação},
volume = {27},
number = {01},
year = {2019},
keywords = {Tangible interfaces; Evaluation; Affective states; Ludic},
abstract = {Sistemas computacionais contemporâneos e ubíquos demandam cada vez mais avaliações que consideram aspectos para além da ergonomia, usabilidade e acessibilidade, para incluir também meios de entender o estado afetivo dos envolvidos na interação. Contudo, principalmente quando as partes envolvidas são crianças, é necessário promover meios lúdicos e acessíveis para envolver as pessoas nas atividades de avaliação, pois espera-se que a ferramenta utilizada na avaliação permita que os envolvidos se expressem de acordo com sua idade e compreensão. Trabalhos existentes propõem soluções abstratas que dificultam a compreensão e a participação das pessoas na expressão de estados afetivos. Neste artigo, desenvolvemos e avaliamos o ambiente TangiSAM, que engloba conjuntos de bonecos tridimensionais concretos que se utilizam de tecnologias tangíveis que permitem efetuar avaliação de estados afetivos de maneira lúdica. Conduzimos um estudo em um espaço educativo real com crianças e professoras para entender se os artefatos tangíveis do TangiSAM favorecem uma melhor experiência de autoavaliação. Descobrimos que o TangiSAM obteve maior preferência pelos participantes quando comparado com outras propostas de representação de estados afetivos.},
issn = {2317-6121},
pages = {58},
doi = {10.5753/rbie.2019.27.01.58},
url = {https://www.br-ie.org/pub/index.php/rbie/article/view/7753}
}
Modern and ubiquitous computational systems increasingly demand evaluations that consider aspects beyond ergonomics, usability and accessibility, including means of understanding the affective states of those involved in the interaction. Nevertheless, when the involved parties are predominantly children, it becomes necessary to promote ludic and accessible ways of engaging people in the evaluation activities, because the assessment tool is expected to allow all stakeholders to express themselves according to their age and understanding. Existing studies have proposed abstract solutions that hinder the comprehension and participation of those involved in the expression of affective states. In this article, we developed and evaluated the TangiSAM environment, which includes sets of three-dimensional concrete manikins that take advantage of tangible technologies, allowing the assessment of affective states in a ludic manner. We conducted an evaluation in a real-world educational setting, including both children and teachers, in order to understand whether TangiSAM's tangible artifacts favor a better self-evaluation experience. We found that TangiSAM was most often chosen as the favorite by participants in comparison to other affective-state representation proposals.
|
2018 |
Virginio, Luiz;
dos Reis, Julio Cesar
Automated Coding of Medical Diagnostics from Free-Text: The Role of Parameters Optimization and Imbalanced Classes (conference)
Data Integration in the Life Sciences,
Springer,
2018.
(
Abstract |
Links |
BibTeX |
Tags:
Automated ICD coding,
Multi-label classification,
Imbalanced classes
)
@InProceedings{virginio2018a,
author="Virginio, Luiz
and dos Reis, Julio Cesar",
editor="Auer, S{\"o}ren
and Vidal, Maria-Esther",
title="Automated Coding of Medical Diagnostics from Free-Text: The Role of Parameters Optimization and Imbalanced Classes",
booktitle="Data Integration in the Life Sciences",
year="2019",
publisher="Springer International Publishing",
address="Cham",
pages="122--134",
abstract="The extraction of codes from Electronic Health Records (EHR) data is an important task because extracted codes can be used for different purposes such as billing and reimbursement, quality control, epidemiological studies, and cohort identification for clinical trials. The codes are based on standardized vocabularies. Diagnostics, for example, are frequently coded using the International Classification of Diseases (ICD), which is a taxonomy of diagnosis codes organized in a hierarchical structure. Extracting codes from free-text medical notes in EHR such as the discharge summary requires the review of patient data searching for information that can be coded in a standardized manner. The manual human coding assignment is a complex and time-consuming process. The use of machine learning and natural language processing approaches have been receiving an increasing attention to automate the process of ICD coding. In this article, we investigate the use of Support Vector Machines (SVM) and the binary relevance method for multi-label classification in the task of automatic ICD coding from free-text discharge summaries. In particular, we explored the role of SVM parameters optimization and class weighting for addressing imbalanced class. Experiments conducted with the Medical Information Mart for Intensive Care III (MIMIC III) database reached 49.86{\%} of f1-macro for the 100 most frequent diagnostics. Our findings indicated that optimization of SVM parameters and the use of class weighting can improve the effectiveness of the classifier.",
isbn="978-3-030-06016-9"
}
The extraction of codes from Electronic Health Records (EHR) data is an important task because extracted codes can be used for different purposes such as billing and reimbursement, quality control, epidemiological studies, and cohort identification for clinical trials. The codes are based on standardized vocabularies. Diagnostics, for example, are frequently coded using the International Classification of Diseases (ICD), a taxonomy of diagnosis codes organized in a hierarchical structure. Extracting codes from free-text medical notes in EHR, such as the discharge summary, requires reviewing patient data in search of information that can be coded in a standardized manner. Manual coding assignment is a complex and time-consuming process. The use of machine learning and natural language processing approaches has been receiving increasing attention as a way to automate ICD coding. In this article, we investigate the use of Support Vector Machines (SVM) and the binary relevance method for multi-label classification in the task of automatic ICD coding from free-text discharge summaries. In particular, we explore the role of SVM parameter optimization and class weighting for addressing imbalanced classes. Experiments conducted with the Medical Information Mart for Intensive Care III (MIMIC III) database reached 49.86% of f1-macro for the 100 most frequent diagnostics. Our findings indicate that optimization of SVM parameters and the use of class weighting can improve the effectiveness of the classifier.
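The pipeline described above — binary relevance with one weighted SVM per code — can be sketched with scikit-learn. The toy notes and ICD-10 labels below are illustrative stand-ins for MIMIC-III data, and the parameter values are assumptions, not the tuned values from the paper.

```python
# Hedged sketch: binary-relevance multi-label classification with class
# weighting, in the spirit of the approach described in the abstract.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.svm import LinearSVC

notes = [
    "patient with chest pain and hypertension",
    "fever and productive cough, suspected pneumonia",
    "hypertension follow-up, blood pressure controlled",
    "shortness of breath, pneumonia confirmed on x-ray",
]
codes = [["I10"], ["J18"], ["I10"], ["J18"]]  # toy ICD-10 labels

mlb = MultiLabelBinarizer()
y = mlb.fit_transform(codes)          # binary indicator matrix, one column per code
X = TfidfVectorizer().fit_transform(notes)

# Binary relevance: one independent SVM per ICD code.
# class_weight="balanced" counteracts label imbalance, as discussed above.
clf = OneVsRestClassifier(LinearSVC(class_weight="balanced", C=1.0))
clf.fit(X, y)
predicted_codes = mlb.inverse_transform(clf.predict(X))
```

In practice the per-classifier `C` (and the weighting scheme) would be chosen by grid search, which is precisely the parameter-optimization step the abstract highlights.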
|
Carvalho, Lucas
Reproducibility and Reuse of Experiments in eScience: Workflows, Ontologies and Scripts (phdthesis)
University of Campinas - Institute of Computing,
phdthesis,
2018.
(
Abstract |
Links |
BibTeX |
Tags:
scientific workflows,
ontologies,
reuse,
reproducibility
)
@phdthesis{Carvalho2018b,
abstract = {Scripts and Scientific Workflow Management Systems (SWfMSs) are common approaches that have been used to automate the execution flow of processes and data analysis in scientific (computational) experiments. Although widely used in many disciplines, scripts are hard to understand, adapt, reuse, and reproduce. For this reason, several solutions have been proposed to aid experiment reproducibility for script-based environments. However, they neither allow to fully document the experiment nor do they help when third parties want to reuse just part of the code. SWfMSs, on the other hand, help documentation and reuse by supporting scientists in the design and execution of their experiments, which are specified and run as interconnected (reusable) workflow components (a.k.a. building blocks). While workflows are better than scripts for understandability and reuse, they still require additional documentation. During experiment design, scientists frequently create workflow variants, e.g., by changing workflow components. Reuse and reproducibility require understanding and tracking variant provenance, a time-consuming task. This thesis aims to support reproducibility and reuse of computational experiments. To meet these challenges, we address two research problems: (1) understanding a computational experiment, and (2) extending a computational experiment. Our work towards solving these problems led us to choose workflows and ontologies to answer both problems. 
The main contributions of this thesis are thus: (i) to present the requirements for the conversion of script to reproducible research; (ii) to propose a methodology that guides the scientists through the process of conversion of script-based experiments into reproducible workflow research objects; (iii) to design and implement features for quality assessment of computational experiments; (iv) to design and implement W2Share, a framework to support the conversion methodology, which exploits tools and standards that have been developed by the scientific community to promote reuse and reproducibility; (v) to design and implement OntoSoft-VFF, a framework for capturing information about software and workflow components to support scientists manage workflow exploration and evolution. Our work is showcased via use cases in Molecular Dynamics, Bioinformatics and Weather Forecasting.},
author = {Lucas Carvalho},
date = {2018-12-14},
keyword = {scientific workflows; ontologies; reuse; reproducibility},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2018/carvalho-lucas-thesis-2018.pdf},
school = {University of Campinas - Institute of Computing},
title = {Reproducibility and Reuse of Experiments in eScience: Workflows, Ontologies and Scripts},
year = {2018}
}
Scripts and Scientific Workflow Management Systems (SWfMSs) are common approaches that have been used to automate the execution flow of processes and data analysis in scientific (computational) experiments. Although widely used in many disciplines, scripts are hard to understand, adapt, reuse, and reproduce. For this reason, several solutions have been proposed to aid experiment reproducibility for script-based environments. However, they neither allow to fully document the experiment nor do they help when third parties want to reuse just part of the code. SWfMSs, on the other hand, help documentation and reuse by supporting scientists in the design and execution of their experiments, which are specified and run as interconnected (reusable) workflow components (a.k.a. building blocks). While workflows are better than scripts for understandability and reuse, they still require additional documentation. During experiment design, scientists frequently create workflow variants, e.g., by changing workflow components. Reuse and reproducibility require understanding and tracking variant provenance, a time-consuming task. This thesis aims to support reproducibility and reuse of computational experiments. To meet these challenges, we address two research problems: (1) understanding a computational experiment, and (2) extending a computational experiment. Our work towards solving these problems led us to choose workflows and ontologies to answer both problems. 
The main contributions of this thesis are thus: (i) to present the requirements for the conversion of script to reproducible research; (ii) to propose a methodology that guides the scientists through the process of conversion of script-based experiments into reproducible workflow research objects; (iii) to design and implement features for quality assessment of computational experiments; (iv) to design and implement W2Share, a framework to support the conversion methodology, which exploits tools and standards that have been developed by the scientific community to promote reuse and reproducibility; (v) to design and implement OntoSoft-VFF, a framework for capturing information about software and workflow components to support scientists manage workflow exploration and evolution. Our work is showcased via use cases in Molecular Dynamics, Bioinformatics and Weather Forecasting.
|
D’Abreu, J. V. V.;
DOS REIS, J. C.
Robótica Pedagógica no NIED: contribuições e perspectivas futuras ()
Tecnologia e Educação: passado, presente e o que está por vir,
NIED,
,
2018.
(
Abstract |
Links |
BibTeX |
Tags:
)
Educational Robotics (ER) is a field of knowledge that integrates several disciplines. In schools, it is often introduced as a way to pursue an interdisciplinary approach and to promote the use of technology in education. These technologies involve kits and materials for assembling robots, software to program them and, consequently, computers (in their most varied models and formats) to program the automation and control of the assembled robot. In addition, these aspects should be guided by a methodology that strengthens and qualifies the use of ER as a tool capable of diversifying and enriching the teaching and learning environment at the most diverse levels, from basic to higher education...
|
Carvalho, Lucas A. M. C.;
Garijo, Daniel;
Medeiros, Claudia Bauzer;
Gil, Yolanda
Semantic Software Metadata for Workflow Exploration and Evolution (conference)
Proceedings of the 2018 IEEE 14th International Conference on eScience,
IEEE,
Amsterdam, the Netherlands, October 28-November 01,
2018.
(
Abstract |
Links |
BibTeX |
Tags:
scientific workflows,
software metadata,
software functions,
software registries,
workflow evolution
)
@conference{Carvalho2018Semantic,
abstract = {Scientific workflow management systems play a major role in the design, execution and documentation of computational experiments. However, they have limited support for managing workflow evolution and exploration because they lack rich metadata for the software that implements workflow components. Such metadata could be used to support scientists in exploring local adjustments to a workflow, replacing components with similar software, or upgrading components upon release of newer software versions. To address this challenge, we propose OntoSoft-VFF (Ontology for Software Version, Function and Functionality), a software metadata repository designed to capture information about software and workflow components that is important for managing workflow exploration and evolution. Our approach uses a novel ontology to describe the functionality and evolution through time of any software used to create workflow components. OntoSoft-VFF is implemented as an online catalog that stores semantic metadata for software to enable workflow exploration through understanding of software functionality and evolution. The catalog also supports comparison and semantic search of software metadata. We showcase OntoSoft-VFF using machine learning workflow examples. We validate our approach by testing that a workflow system could compare differences in software metadata, explain software updates, and describe the general functionality of workflow steps.},
address = {Amsterdam, the Netherlands, October 28-November 01},
author = {Lucas A. M. C. Carvalho and Daniel Garijo and Claudia Bauzer Medeiros and Yolanda Gil},
booktitle = {Proceedings of the 2018 IEEE 14th International Conference on eScience},
date = {2018-10-28},
keyword = {scientific workflows, software metadata, software functions, software registries, workflow evolution},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2018/semantic-software-metadata-for-workflow-exploration-and-evolution-camera-ready.pdf},
publisher = {IEEE},
title = {Semantic Software Metadata for Workflow Exploration and Evolution},
year = {2018}
}
Scientific workflow management systems play a major role in the design, execution and documentation of computational experiments. However, they have limited support for managing workflow evolution and exploration because they lack rich metadata for the software that implements workflow components. Such metadata could be used to support scientists in exploring local adjustments to a workflow, replacing components with similar software, or upgrading components upon release of newer software versions. To address this challenge, we propose OntoSoft-VFF (Ontology for Software Version, Function and Functionality), a software metadata repository designed to capture information about software and workflow components that is important for managing workflow exploration and evolution. Our approach uses a novel ontology to describe the functionality and evolution through time of any software used to create workflow components. OntoSoft-VFF is implemented as an online catalog that stores semantic metadata for software to enable workflow exploration through understanding of software functionality and evolution. The catalog also supports comparison and semantic search of software metadata. We showcase OntoSoft-VFF using machine learning workflow examples. We validate our approach by testing that a workflow system could compare differences in software metadata, explain software updates, and describe the general functionality of workflow steps.
|
Justo, Andrey Victor;
dos Reis, Julio Cesar;
Calado, Ivo;
Bonacin, Rodrigo;
Jensen, Felipe Rodrigues
Exploring Ontologies to Improve the Empathy of Interactive Bots (conference)
2018 IEEE 27th International Conference on Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE),
IEEE,
2018.
(
Abstract |
Links |
BibTeX |
Tags:
Interactive bots,
Affectivity,
Ontologies,
SWRL
)
@INPROCEEDINGS{justo2018e,
author={A. V. {Justo} and J. {Cesar dos Reis} and I. {Calado} and R. {Bonacin} and F. R. {Jensen}},
booktitle={2018 IEEE 27th International Conference on Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE)},
title={Exploring Ontologies to Improve the Empathy of Interactive Bots},
year={2018},
volume={},
number={},
pages={261-266},
abstract={Bots are virtual agents that people can interact with text messages. They are mostly made with the aim of mimicking a person in conversations. Although several studies have devised natural language processing techniques for the creation of bots, few studies explore the use of ontologies in the development of novel context-aware interactive bots. In this article, we propose a software architecture that allows ontology-based interpretation of several types of data (audio, video, and text) from the bot's environment. We define formal concept-based rules to express affective behavior aiming to improve the empathy of bots. The proposed technique relies on Semantic technologies such as OWL and SWRL languages. This technique is illustrated in an interaction scenario.},
keywords={interactive systems;knowledge representation languages;natural language processing;ontologies (artificial intelligence);software architecture;interaction scenario;virtual agents;text messages;natural language;context-aware interactive bots;ontology-based interpretation;formal concept-based rules;OWL;SWRL languages;semantic technologies;Ontologies;Computer architecture;Personal digital assistants;Semantics;Proposals;Neurons;Face;Interactive bots;Affectivity;Ontologies;SWRL},
doi={10.1109/WETICE.2018.00057},
ISSN={1524-4547},
month={June},}
Bots are virtual agents with which people can interact through text messages. They are mostly built with the aim of mimicking a person in conversation. Although several studies have devised natural language processing techniques for the creation of bots, few explore the use of ontologies in the development of novel context-aware interactive bots. In this article, we propose a software architecture that allows ontology-based interpretation of several types of data (audio, video, and text) from the bot's environment. We define formal concept-based rules to express affective behavior, aiming to improve the empathy of bots. The proposed technique relies on Semantic Web technologies such as the OWL and SWRL languages. The technique is illustrated in an interaction scenario.
|
Destro, Juliana Medeiros;
dos Santos, Gabriel Oliveira;
dos Reis, Julio Cesar;
Torres, Ricardo da S.;
Carvalho, Ariadne Maria B. R.;
Ricarte, Ivan Luiz Marques
EVOCROS: Results for OAEI 2018 (conference)
The Thirteenth International Workshop on Ontology Matching - International Semantic Web Conference ISWC-2018,
CEUR-WS,
2018.
(
Abstract |
Links |
BibTeX |
Tags:
cross-lingual matching,
semantic matching,
background knowledge
)
@inproceedings{destro2018e,
title={EVOCROS: Results for OAEI 2018},
author={Destro, Juliana Medeiros and dos Santos, Gabriel Oliveira and dos Reis, Julio Cesar and Torres, Ricardo da S. and Carvalho, Ariadne Maria B. R. and Ricarte, Ivan Luiz Marques},
booktitle={The Thirteenth International Workshop on Ontology Matching - International Semantic Web Conference ISWC-2018},
year={2018}
}
This paper describes EVOCROS, a cross-lingual ontology alignment system suited to creating mappings between ontologies described in different natural languages. Our tool combines semantic and syntactic similarity measures in a weighted average metric. The semantic similarity is computed via NASARI vectors used together with BabelNet, a domain-neutral semantic network. The tool employs automatic translation into a pivot language before computing the similarity. EVOCROS was tested and obtained high-quality alignments on the MultiFarm dataset. We discuss the evaluated configurations and the results achieved in OAEI 2018. This is our first participation in OAEI.
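The weighted-average combination described above can be sketched as follows. The two similarity functions here are simple stand-ins (character-level ratio and token overlap) for the system's actual syntactic measure and NASARI/BabelNet semantic measure, and the weight `alpha` is an assumed parameter.

```python
# Hedged sketch of a weighted average of syntactic and semantic similarity.
from difflib import SequenceMatcher

def syntactic_sim(a: str, b: str) -> float:
    # Character-level similarity as a stand-in for a string measure.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def semantic_sim(a: str, b: str) -> float:
    # Placeholder: Jaccard token overlap instead of NASARI embeddings.
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def combined_sim(a: str, b: str, alpha: float = 0.5) -> float:
    # Weighted average of the two measures; alpha balances their influence.
    return alpha * semantic_sim(a, b) + (1 - alpha) * syntactic_sim(a, b)

score = combined_sim("heart attack", "heart failure")
```

In the cross-lingual setting, both labels would first be machine-translated into a pivot language before this comparison is applied.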
|
Abdessalem, Talel;
Medeiros, Claudia B.;
Cellary, W.;
Gancarski, W.;
Manouvrier, M.;
Rukoz, M.;
Zam, M.
The Database Version Approach: Overview and Future directions (conference)
34ème Conférence sur la Gestion de Données - Principes, Technologies et Applications (BDA 2018),
2018.
(
Abstract |
Links |
BibTeX |
Tags:
Database versions
)
@inproceedings{abmece18,
author = { Abdessalem, Talel and Medeiros, Claudia Bauzer and Cellary, W. and Gancarski, W. and Manouvrier, M. and Rukoz, M. and Zam, M.},
booktitle = {34ème Conférence sur la Gestion de Données - Principes, Technologies et Applications (BDA 2018)},
pages = {1-10},
address={Bucarest},
title = {{The Database Version Approach: Overview and Future directions}},
year = {2018}
}
|
Dos Reis, Julio Cesar;
Bonacin, Rodrigo;
Hornung, Heiko Horst;
Baranauskas, Maria Cecília Calani
Intenticons: Participatory selection of emoticons for communication of intentions (journal)
Computers in Human Behavior,
Elsevier,
journal,
2018.
(
Abstract |
Links |
BibTeX |
Tags:
Emoticons,
Meanings,
Intentions,
Pragmatics,
Computer-mediated communication,
User participation
)
@article{DOSREIS2018146,
title = "Intenticons: Participatory selection of emoticons for communication of intentions",
journal = "Computers in Human Behavior",
volume = "85",
pages = "146 - 162",
year = "2018",
issn = "0747-5632",
doi = "https://doi.org/10.1016/j.chb.2018.03.046",
url = "http://www.sciencedirect.com/science/article/pii/S0747563218301511",
author = "Julio Cesar [dos Reis] and Rodrigo Bonacin and Heiko Horst Hornung and M. Cecília C. Baranauskas",
keywords = "Emoticons, Meanings, Intentions, Pragmatics, Computer-mediated communication, User participation",
abstract = "Previous studies have emphasised that emoticons are able to express more than emotions, assuming a central role on computer mediation communication. Explicit consideration of intentions in computer systems might play a significant role for improving communication and collaboration. Nevertheless, web-mediated communication lacks elements that are natural in face-to-face conversation for signalling intention. In this article, we propose so-called Intenticons as a set of emoticons designed (and/or selected) to communicate intentions as an interactive mechanism to support users in expressing intentions. This study presents an experimental analysis to evaluate whether Intenticons designed in a participatory way convey intentions better than emoticons selected by designers in a non-participatory way. We rely on a theoretical framework based on Speech Act Theory and Semiotics to categorize different classes of intentions. The achieved results, based on statistical tests, revealed that the Intenticons were more adequate for most of the intention classes. Our findings demonstrated the value of the user involvement for obtaining adequate emoticons in intention sharing."
}
Previous studies have emphasised that emoticons are able to express more than emotions, assuming a central role in computer-mediated communication. Explicit consideration of intentions in computer systems might play a significant role in improving communication and collaboration. Nevertheless, web-mediated communication lacks elements that are natural in face-to-face conversation for signalling intention. In this article, we propose so-called Intenticons, a set of emoticons designed (and/or selected) to communicate intentions, as an interactive mechanism to support users in expressing intentions. This study presents an experimental analysis to evaluate whether Intenticons designed in a participatory way convey intentions better than emoticons selected by designers in a non-participatory way. We rely on a theoretical framework based on Speech Act Theory and Semiotics to categorize different classes of intentions. The results, based on statistical tests, revealed that the Intenticons were more adequate for most of the intention classes. Our findings demonstrate the value of user involvement in obtaining adequate emoticons for intention sharing.
|
Saraiva, Márcio de Carvalho;
Medeiros, Claudia Bauzer
Relating educational materials via extraction of their topics (conference)
Proceedings of the VLDB 2018 Ph.D. Workshop, August 27, 2018,
Rio de Janeiro, Brazil,
2018.
(
Abstract |
Links |
BibTeX |
Tags:
Components,
Content analysis and feature selection,
educational material,
Information extraction,
Topics Classification
)
@conference{deSaraiva2018,
abstract = {Digital educational documents are growing in size and variety,
and scientists are facing difficulties to find their way
through them. One of the initiatives that have emerged to
solve this problem involves the use of automatic classification
algorithms. However, it is difficult to analyze implicit
relationships among topics of materials. This paper presents
CIMAL, a framework for enabling flexible access to material
stored in arbitrary repositories. CIMAL combines semantic
classification, taxonomies and graphs to elicit relationships
among topics of educational documents. We validated
our work using materials from Coursera (courses offered by
Johns Hopkins University and University of Michigan) and
a Higher Education Institute, from Brazil.},
address = {Rio de Janeiro, Brazil},
author = {Márcio de Carvalho Saraiva and Claudia Bauzer Medeiros},
booktitle = {Proceedings of the VLDB 2018 Ph.D. Workshop, August 27, 2018. Rio de
Janeiro, Brazil},
date = {2018-08-27},
keyword = {Components, Content analysis and feature selection, educational material, Information extraction, Topics Classification},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2018/Marcio-PHDVLDB.pdf},
organization = {IEEE},
title = {Relating educational materials via extraction of their topics},
year = {2018}
}
Digital educational documents are growing in size and variety, and scientists are facing difficulties to find their way through them. One of the initiatives that have emerged to solve this problem involves the use of automatic classification algorithms. However, it is difficult to analyze implicit relationships among topics of materials. This paper presents CIMAL, a framework for enabling flexible access to material stored in arbitrary repositories. CIMAL combines semantic classification, taxonomies and graphs to elicit relationships among topics of educational documents. We validated our work using materials from Coursera (courses offered by Johns Hopkins University and University of Michigan) and a Higher Education Institute, from Brazil.
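The idea of eliciting relationships among educational documents through their topics can be sketched as a small graph construction: two documents are linked when they share a classified topic. The file names and topic labels below are hypothetical, and the real framework derives topics via semantic classification against a taxonomy rather than the hand-assigned sets used here.

```python
# Hedged sketch: building a document graph from shared topics.
from collections import defaultdict
from itertools import combinations

# Hypothetical documents with already-classified topic sets.
docs = {
    "lecture1.pdf": {"statistics", "regression"},
    "lecture2.pdf": {"regression", "neural networks"},
    "lecture3.pdf": {"databases"},
}

# Undirected graph: an edge exists when two documents share at least one topic.
edges = defaultdict(set)
for a, b in combinations(docs, 2):
    if docs[a] & docs[b]:
        edges[a].add(b)
        edges[b].add(a)
```

Traversing such a graph lets a reader move from one material to related ones even across repositories, which is the kind of flexible access the abstract describes.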
|
Saraiva, Márcio de Carvalho;
Medeiros, Claudia Bauzer
Correlating Educational Documents from Different Sources Through Graphs and Taxonomies (conference)
Proceedings of the SBC 33rd Brazilian Symposium on Databases (SBBD) 2018,
Rio de Janeiro, Brazil,
2018.
(
Abstract |
Links |
BibTeX |
Tags:
Components,
Content analysis and feature selection,
educational material,
Information extraction,
Topics Classification
)
@conference{deSaraiva2018b,
abstract = {Digital educational documents are growing in size and variety, and
scientists are facing difficulties to find their way through them. One of the initiatives
that have emerged to solve this problem involves the use of automatic
classification algorithms. However, it is difficult to analyze implicit relationships
among topics of materials. This paper presents CIMAL, a framework for
enabling flexible access to material stored in arbitrary repositories. CIMAL
combines semantic classification, taxonomies and graphs to elicit relationships
among topics of educational documents. We validated our work using materials
from Coursera (courses offered by Johns Hopkins University and University of
Michigan) and a Higher Education Institute, from Brazil.},
address = {Rio de Janeiro, Brazil},
author = {Márcio de Carvalho Saraiva and Claudia Bauzer Medeiros},
booktitle = {Proceedings of the SBC 33rd Brazilian Symposium on Databases (SBBD) 2018, Rio de
Janeiro, Brazil},
date = {2018-08-25},
keyword = {Components, Content analysis and feature selection, educational material, Information extraction, Topics Classification},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2018/Marcio-SBBD2018.pdf},
organization = {IEEE},
title = {Correlating Educational Documents from Different Sources Through Graphs and Taxonomies},
year = {2018}
}
Digital educational documents are growing in size and variety, and scientists are facing difficulties to find their way through them. One of the initiatives that have emerged to solve this problem involves the use of automatic classification algorithms. However, it is difficult to analyze implicit relationships among topics of materials. This paper presents CIMAL, a framework for enabling flexible access to material stored in arbitrary repositories. CIMAL combines semantic classification, taxonomies and graphs to elicit relationships among topics of educational documents. We validated our work using materials from Coursera (courses offered by Johns Hopkins University and University of Michigan) and a Higher Education Institute, from Brazil.
|
de Araújo, Ricardo José;
Dos Reis, Julio Cesar;
Bonacin, Rodrigo
Understanding interface recoloring aspects by colorblind people: a user study (journal)
Universal Access in the Information Society,
Springer,
journal,
2018.
(
Abstract |
Links |
BibTeX |
Tags:
Colorblind people,
Interface recoloring,
Accessibility,
User study,
User preferences,
Recoloring algorithms,
Interface adaptation
)
@article{deAraujo2018u,
title={Understanding interface recoloring aspects by colorblind people: a user study},
author={de Ara{\'u}jo, Ricardo Jos{\'e} and Dos Reis, Julio Cesar and Bonacin, Rodrigo},
journal={Universal Access in the Information Society},
pages={1--18},
year={2018},
publisher={Springer}
}
Current web technologies make intensive use of colors in web pages. Nowadays, colors are essential in the design of interfaces and play a central role in the distinction and comprehension of information. However, this affects colorblind users, i.e., those who have difficulties in recognizing or distinguishing colors. This paper presents a user study involving colorblind people to empirically investigate several aspects related to the recoloring of web interfaces. We aim to detect limitations, barriers, and needs about these users’ interaction with web pages. Our employed evaluation investigates indicators of satisfaction (contentment) and pleasantness (enjoyable) for several scenarios of interface recoloring adaptation. We found a ranking of application for interface adaptation techniques with the use of recoloring algorithms. The obtained results reveal the advantages of considering the colorblind individual’s needs and preferences for the development of adaptive systems. Our contribution can enhance web interface accessibility based on user interface adaptation techniques.
|
Gonçalves, Fabrício Matheus;
Jensen, Felipe Rodrigues;
dos Reis, Julio Cesar;
Baranauskas, Maria Cecília Calani
Enhancing Problem Clarification Artifacts with Online Deliberation (conference)
Proceedings of the 13th International Conference on Software Technologies - Volume 1: ICSOFT, 288-295, 2018, Porto, Portugal,
SciTePress,
2018.
(
Abstract |
Links |
BibTeX |
Tags:
Online Deliberation,
Socially Aware Computing,
Organisational Semiotics
)
@conference{goncalves2018e,
author={Fabrício Matheus Gon\c{c}alves and Felipe Rodrigues Jensen and Julio Cesar dos Reis and Maria Cecília Calani Baranauskas},
title={Enhancing Problem Clarification Artifacts with Online Deliberation},
booktitle={Proceedings of the 13th International Conference on Software Technologies - Volume 1: ICSOFT},
year={2018},
pages={288-295},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0006869103220329},
isbn={978-989-758-320-9}
}
Information system design demands understanding requirements from diversified stakeholders. As an initial step, the problem clarification is essential to obtain a shared view of the involved problems and solutions. Several techniques have been proposed and practiced by the systems engineering community for problem clarification. While existing literature has brought problem clarification artifacts via an online computational system, stakeholders still lack means of meaning negotiation practices that usually happen in face-to-face meetings. This paper proposes a deliberation model integrated to the online use of problem clarification artifacts. The deliberation provides a collaborative process for building common ground for reflection. The proposed model illustrates the possibilities of deliberation in statements created in three artefacts of the Organizational Semiotics: Stakeholder Identification, Evaluation Frame and Semiotic Framework.
|
dos Reis, Julio Cesar;
de Brito, Mario Ferreira
Transparência para Humanos e Máquinas: Um framework para Publicar Dados Abertos Interconectados Semanticamente Descritos (workshop)
Anais do VI Workshop de Transparência em Sistemas,
SBC,
workshop,
2018.
(
Abstract |
Links |
BibTeX |
Tags:
)
@inproceedings{dosReis2018t,
author = {Julio Cesar dos Reis and Mario de Brito},
title = {Transparência para Humanos e Máquinas: Um framework para Publicar Dados Abertos Interconectados Semanticamente Descritos},
booktitle = {Anais do VI Workshop de Transparência em Sistemas},
location = {Natal},
year = {2018},
keywords = {},
issn = {2595-6140},
publisher = {SBC},
address = {Porto Alegre, RS, Brasil},
doi = {10.5753/wtrans.2018.3094},
url = {https://portaldeconteudo.sbc.org.br/index.php/wtrans/article/view/3094}
}
The Semantic Web allows the semantics of data to be described explicitly for humans and machines. Data transparency requires publishing data in a structured form for other systems. Linked open data, made available on the Web without copyright restrictions, is the key to achieving transparency mechanisms. Often, however, the data an organization needs to publish is scattered across several isolated systems. In this article, we propose a framework to enable the publication of linked open data from multiple data sources, aiming at transparency. The study context is a public university where data of distinct natures must be made available to users with different profiles.
|
de França, Breno Bernard Nicolau;
dos Reis, Julio Cesar;
de Azevedo, Rodolfo Jardim
Desafios Sociotécnicos e Prospecções para Promover Transparência de Dados na Universidade (workshop)
Anais do VI Workshop de Transparência em Sistemas,
SBC,
workshop,
2018.
(
Abstract |
Links |
BibTeX |
Tags:
)
@inproceedings{deFranca2018d,
author = {Breno Bernard Nicolau de França and Julio Cesar dos Reis and Rodolfo Jardim de Azevedo},
title = {Desafios Sociotécnicos e Prospecções para Promover Transparência de Dados na Universidade},
booktitle = {Anais do VI Workshop de Transparência em Sistemas},
location = {Natal},
year = {2018},
keywords = {},
issn = {2595-6140},
publisher = {SBC},
address = {Porto Alegre, RS, Brasil},
doi = {10.5753/wtrans.2018.3091},
url = {https://portaldeconteudo.sbc.org.br/index.php/wtrans/article/view/3091}
}
Data transparency is a key aspect for the development of several sectors of society. In public universities, transparency fosters knowledge about what is developed, allowing an understanding of where resources are invested. In this article, we present the results of a study on transparency, based on data collected from two initiatives to spread the culture of transparency at UNICAMP. We identify sociotechnical challenges and point to an architectural solution to facilitate transparency-related processes within the university and to promote unrestricted, easy access to public data.
|
Dos Reis, Julio Cesar;
Bonacin, Rodrigo;
Jensen, Cristiane Josely;
Hornung, Heiko Horst;
Baranauskas, Maria Cecília Calani
Design of Interactive Mechanisms to Support the Communication of Users’ Intentions (journal)
Interacting with Computers,
Oxford University Press,
journal,
2018.
(
Abstract |
Links |
BibTeX |
Tags:
graphical user interfaces, user interface design, computer supported collaborative work
)
@article{dosReis2018d,
author = {dos Reis, Julio Cesar and Bonacin, Rodrigo and Jensen, Cristiane Josely and Hornung, Heiko Horst and Baranauskas, Maria Cecília Calani},
title = "{Design of Interactive Mechanisms to Support the Communication of Users’ Intentions}",
journal = {Interacting with Computers},
volume = {30},
number = {4},
pages = {315-335},
year = {2018},
month = {07},
abstract = "{The communication and interpretation of users’ intentions play a key role in collaborative web discussions. However, existing computational mechanisms are not effective in supporting the expression of intentions during collaborations. In this article, we present the design of interactive mechanisms that allow users to make their intentions explicit. The study considered the domain of collaborative forums of software developers. The mechanisms design was based on semiotic principles and artifacts. They were implemented and evaluated to assess their effectiveness. We investigated to which extent the mechanisms support users in the task of interpreting message exchanges in forums that make use of the mechanisms. The results reveal the suitability of the designed interface elements, enabling more meaningful and successful communication.}",
issn = {0953-5438},
doi = {10.1093/iwc/iwy013},
url = {https://doi.org/10.1093/iwc/iwy013},
eprint = {https://academic.oup.com/iwc/article-pdf/30/4/315/25243814/iwy013.pdf},
}
The communication and interpretation of users’ intentions play a key role in collaborative web discussions. However, existing computational mechanisms are not effective in supporting the expression of intentions during collaborations. In this article, we present the design of interactive mechanisms that allow users to make their intentions explicit. The study considered the domain of collaborative forums of software developers. The mechanisms design was based on semiotic principles and artifacts. They were implemented and evaluated to assess their effectiveness. We investigated to which extent the mechanisms support users in the task of interpreting message exchanges in forums that make use of the mechanisms. The results reveal the suitability of the designed interface elements, enabling more meaningful and successful communication.
|
Bonacin, Rodrigo;
Calado, Ivo;
Dos Reis, Julio Cesar
A Metamodel for Supporting Interoperability in Heterogeneous Ontology Networks (conference)
Digitalisation, Innovation, and Transformation,
Springer,
2018.
(
Abstract |
Links |
BibTeX |
Tags:
Ontology Chart,
OWL ontologies,
Soft ontologies,
Metamodeling
)
@InProceedings{bonacin2018m,
author="Bonacin, Rodrigo
and Calado, Ivo
and dos Reis, Julio Cesar",
editor="Liu, Kecheng
and Nakata, Keiichi
and Li, Weizi
and Baranauskas, Cecilia",
title="A Metamodel for Supporting Interoperability in Heterogeneous Ontology Networks",
booktitle="Digitalisation, Innovation, and Transformation",
year="2018",
publisher="Springer International Publishing",
address="Cham",
pages="187--196",
abstract="Ontologies are central artifacts in modern information systems. Ontology networks consider the coexistence of different ontology models in the same conceptual space. It is relevant that computational systems specified with distinct models based on different methods, as well as divergent metaphysical assumptions, exchange data to interoperate one with the other. However, there is a lack of techniques to enable the adequate conciliation among models. In this paper, we propose and formalize a metamodel to enable the construction of data models aiming to support the interoperability at the technical level. We present the use of our metamodel to conciliate, without explicit transformations, Ontology Charts from Organizational Semiotics with Semantic Web OWL ontologies and less structured models such as soft ontologies. Our results indicate the possibility of identifying an entity from one model into another, enabling data exchange and interpretation in heterogeneous ontology network.",
isbn="978-3-319-94541-5"
}
Ontologies are central artifacts in modern information systems. Ontology networks consider the coexistence of different ontology models in the same conceptual space. It is relevant that computational systems specified with distinct models based on different methods, as well as divergent metaphysical assumptions, exchange data to interoperate one with the other. However, there is a lack of techniques to enable the adequate conciliation among models. In this paper, we propose and formalize a metamodel to enable the construction of data models aiming to support the interoperability at the technical level. We present the use of our metamodel to conciliate, without explicit transformations, Ontology Charts from Organizational Semiotics with Semantic Web OWL ontologies and less structured models such as soft ontologies. Our results indicate the possibility of identifying an entity from one model into another, enabling data exchange and interpretation in heterogeneous ontology network.
|
Bonacin, Rodrigo;
dos Reis, Julio Cesar;
Mendes P., Edemar;
Nabuco, Olga
Exploring intentions on electronic health records retrieval. Studies with collaborative scenarios (journal)
Ingenierie des Systemes d'Information,
Lavoisier,
journal,
2018.
(
Abstract |
Links |
BibTeX |
Tags:
information retrieval,
electronic health records,
information sharing,
query expansion,
intentions,
illocutions,
speech acts theory
)
@article{bonacin2018e,
author={Bonacin, Rodrigo and dos Reis, Julio Cesar and Mendes P., Edemar and Nabuco, Olga},
year={2018},
title={Exploring intentions on electronic health records retrieval. Studies with collaborative scenarios},
journal={Ingenierie des Systemes d'Information},
volume={23},
number={2},
pages={111-135},
note={Copyright - Copyright Lavoisier 2018; Last updated - 2019-01-15},
abstract={Indépendamment des aspects positifs apportés par les dossiers médicaux partagés (DMP) informatisés, les professionnels de santé sont confrontés à des difficultés dans la sélection des documents pertinents, surtout dans la cadre des grandes bases de données lors des activités de collaboration. Dans le cadre de cet article, nous nous sommes appuyés sur le développement d’un mécanisme innovant de Recherche d’Information (RI) qui explore la représentation formelle des intentions dans les DMP. Cette recherche repose sur la théorie organisationnelle de la sémiotique et de la théorie des actes de langage afin de catégoriser plusieurs types d’intentions. Notre étude porte sur des problèmes de définition, sélection et classement des résultats de recherche et nous examinons les intentions explicitement déclarées par les utilisateurs. Notre principale contribution est le développement d’un système RI qui vient en appui au partage des connaissances de groupe via DMP. Pour évaluer cette proposition, nous avons mené une étude expérimentale d’après un référentiel DMP réel de dossiers médicaux. Deux scénarios sont définis qui impliquent un groupe interdisciplinaire de professionnels de santé. Les résultats obtenus sont analysés à partir de mesures comme la précision et le rappel et ont démontré l’efficacité de cette solution. Despite the potential benefits of Electronic Health Records (EHRs), health care professionals face difficulties in the selection of relevant documents in huge repositories during collaborative activities. In this article, we investigate the development of an innovative Information Retrieval (IR) and sharing mechanism that explores the formal representation of intentions in EHRs. To this end, this research relies on Organizational Semiotics and Speech Acts Theory. We defined an algorithm to filter and sort search results relying on intention classes explicitly declared as query parameters in the search mechanism. As our main contribution, we developed the SiRBI IR system for supporting group knowledge sharing through EHRs. To evaluate the proposal, we conducted an experimental study using a real-world EHR repository in two search scenarios, which involve an interdisciplinary group. The obtained results demonstrated the effectiveness of the solution.},
keywords={Engineering; récupération de l’information; dossier médical électronique; expansion de requêtes; les intentions; la théorie des actes de langage; information retrieval; electronic health records; information sharing; query expansion; intentions; illocutions; speech acts theory; Repositories; Collaboration; Searching},
issn={1633-1311},
language={English},
url={http://www.iieta.org/journals/isi/paper/10.3166/ISI.23.2.111-135}
}
Despite the potential benefits of Electronic Health Records (EHRs), health care professionals face difficulties in the selection of relevant documents in huge repositories during collaborative activities. In this article, we investigate the development of an innovative Information Retrieval (IR) and sharing mechanism that explores the formal representation of intentions in EHRs. To this end, this research relies on Organizational Semiotics and Speech Acts Theory. We defined an algorithm to filter and sort search results relying on intention classes explicitly declared as query parameters in the search mechanism. As our main contribution, we developed the SiRBI IR system for supporting group knowledge sharing through EHRs. To evaluate the proposal, we conducted an experimental study using a real-world EHR repository in two search scenarios, which involve an interdisciplinary group. The obtained results demonstrated the effectiveness of the solution.
|
2017 |
Carvalho, Lucas A. M. C.;
Essawy, Bakinam T.;
Garijo, Daniel;
Medeiros, Claudia Bauzer;
Gil, Yolanda
Requirements for Supporting the Iterative Exploration of Scientific Workflow Variants (conference)
2017 Workshop on Capturing Scientific Knowledge (SciKnow), held in conjunction with the ACM International Conference on Knowledge Capture (K-CAP),
2017.
(
Abstract |
Links |
BibTeX |
Tags:
Knowledge Capture, Scientific Workflows, Workflow Variants, Workshop
)
@conference{Carvalho2017b,
abstract = {Workflow systems support scientists in capturing computational experiments and managing their execution. However, such systems are not designed to help scientists create and track the many related workflows that they build as variants, trying different software implementations and distinct ways to process data and deciding what to do next by looking at previous workflow results. An initial workflow will be changed to create many new variants thereof that differ from each other in one or more steps. Our goal is to support scientists in the iterative design of computational experiments by assisting them in the creation and management of workflow variants. In this paper, we present several use cases for creating workflow variants in hydrology, from which we specify requirements for workflow variants. We also discuss major research directions to address these requirements.},
author = {Lucas A. M. C. Carvalho and Bakinam T. Essawy and Daniel Garijo and Claudia Bauzer Medeiros and Yolanda Gil},
booktitle = {2017 Workshop on Capturing Scientific Knowledge (SciKnow), held in conjunction with the ACM International Conference on Knowledge Capture (K-CAP)},
date = {2017-12-04},
keyword = {Knowledge Capture, Scientific Workflows, Workflow Variants, Workshop},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2018/01/workflow-variants-sciknow-2017-camera-ready.pdf},
pages = {1-8},
title = {Requirements for Supporting the Iterative Exploration of Scientific Workflow Variants},
year = {2017}
}
Workflow systems support scientists in capturing computational experiments and managing their execution. However, such systems are not designed to help scientists create and track the many related workflows that they build as variants, trying different software implementations and distinct ways to process data and deciding what to do next by looking at previous workflow results. An initial workflow will be changed to create many new variants thereof that differ from each other in one or more steps. Our goal is to support scientists in the iterative design of computational experiments by assisting them in the creation and management of workflow variants. In this paper, we present several use cases for creating workflow variants in hydrology, from which we specify requirements for workflow variants. We also discuss major research directions to address these requirements.
|
Santo, Jacqueline Midlej do Espírito;
Medeiros, Claudia Bauzer
Semantic Interoperability of Clinical Data (conference)
Lecture Notes in Bioinformatics (LNBI) - Proceedings of 12th International Conference on Data Integration in the Life Sciences,
Luxemburgo, Luxemburgo,
2017.
(
Abstract |
Links |
BibTeX |
Tags:
Data integration, healthcare
)
@conference{Santo2017,
abstract = {The interoperability of clinical information systems is particularly complicated due to the use of outdated technologies and the absence of consensus about standards. The literature applies standard-based approaches to achieve clinical data interoperability, but many systems do not adopt any standard, requiring a full redesigning process. Instead, we propose a generic computational approach that combines a hierarchical organization of mediator schemas to support the interoperability across distinct data sources. Second, our work takes advantage of knowledge bases to be linked to clinical data, and exploit these semantic linkages via queries. The paper shows case studies to validate our proposal.},
address = {Luxemburgo, Luxemburgo},
author = {Jacqueline Midlej do Espírito Santo and Claudia Bauzer Medeiros},
booktitle = {Lecture Notes in Bioinformatics (LNBI) - Proceedings of 12th International Conference on Data Integration in the Life Sciences},
date = {2017-11-14},
editor = {Springer International Publishing AG},
keyword = {Data integration, healthcare},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2017/10/DILS-jacqueline.pdf},
title = {Semantic Interoperability of Clinical Data},
volume = {10649},
year = {2017}
}
The interoperability of clinical information systems is particularly complicated due to the use of outdated technologies and the absence of consensus about standards. The literature applies standard-based approaches to achieve clinical data interoperability, but many systems do not adopt any standard, requiring a full redesigning process. Instead, we propose a generic computational approach that combines a hierarchical organization of mediator schemas to support the interoperability across distinct data sources. Second, our work takes advantage of knowledge bases to be linked to clinical data, and exploit these semantic linkages via queries. The paper shows case studies to validate our proposal.
|
Moreira, Eliana Alves;
dos Reis, Julio Cesar;
Baranauskas, M. Cecília C.
TangiSAM: Tangible Artifacts for Evaluation of Affective States (conference)
Proceedings of the XVI Brazilian Symposium on Human Factors in Computing Systems,
ACM,
2017.
(
Abstract |
Links |
BibTeX |
Tags:
Tangible interfaces,
Affective states,
Evaluation
)
@inproceedings{moreira2017t,
author = {Moreira, Eliana Alves and dos Reis, Julio Cesar and Baranauskas, M. Cec\'{\i}lia C.},
title = {TangiSAM: Tangible Artifacts for Evaluation of Affective States},
year = {2017},
isbn = {9781450363778},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3160504.3160525},
doi = {10.1145/3160504.3160525},
booktitle = {Proceedings of the XVI Brazilian Symposium on Human Factors in Computing Systems},
articleno = {47},
numpages = {10},
keywords = {Tangible interfaces, Affective states, Evaluation},
location = {Joinville, Brazil},
series = {IHC 2017}
}
Evaluation of affective states is essential for assessing people's perceptions during activities and interaction experience. There is, however, a lack of playful and accessible proposals enabling children, for example, to complete evaluation activities thoroughly. This paper proposes TangiSAM, a technological environment with tangible three-dimensional manikins representing the affective state in the dimensions of pleasure, arousal and dominance. We present the results of a study conducted to investigate the usage of our proposal in a real-world setting with children and teachers. Obtained results showed that TangiSAM was more effective than other approaches for evaluation.
|
Diaz, Juan S. B.;
Medeiros, Claudia Bauzer
WorkflowHunt: combining keyword and semantic search in scientific workflow repositories (conference)
Proceedings of the IEEE 13th International Conference on eScience 2017,
IEEE,
Auckland, New Zealand,
2017.
(
Abstract |
Links |
BibTeX |
Tags:
Scientific Workflows, Semantic Annotation, Workflow Retrieval
)
@conference{Diaz2017,
abstract = {Scientific datasets, and the experiments that analyze them, are growing in size and complexity, and scientists are facing difficulties to share such resources. Some initiatives have emerged to try to solve this problem. One of them involves the use of scientific workflows to represent and enact experiment execution. There is an increasing number of workflows that are potentially relevant for more than one scientific domain. However, it is hard to find workflows suitable for reuse given an experiment. Creating a workflow takes time and resources, and their reuse helps scientists to build new workflows faster and in a more reliable way. Search mechanisms in workflow repositories should provide different options for workflow discovery, but it is difficult for generic repositories to provide multiple mechanisms. This paper presents WorkflowHunt, a hybrid architecture for workflow search and discovery for generic repositories, which combines keyword and semantic search to allow finding relevant workflows using different search methods. We validated our architecture creating a prototype that uses real workflows and metadata from myExperiment, and compared search results via WorkflowHunt and via myExperiment’s search interface.},
address = {Auckland, New Zealand},
author = {Juan S. B. Diaz and Claudia Bauzer Medeiros},
booktitle = {Proceedings of the IEEE 13th International Conference on eScience 2017},
date = {2017-10-24},
keyword = {Scientific Workflows, Semantic Annotation, Workflow Retrieval},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2017/10/PID4958635.pdf},
publisher = {IEEE},
title = {WorkflowHunt: combining keyword and semantic search in scientific workflow repositories},
year = {2017}
}
Scientific datasets, and the experiments that analyze them, are growing in size and complexity, and scientists are facing difficulties to share such resources. Some initiatives have emerged to try to solve this problem. One of them involves the use of scientific workflows to represent and enact experiment execution. There is an increasing number of workflows that are potentially relevant for more than one scientific domain. However, it is hard to find workflows suitable for reuse given an experiment. Creating a workflow takes time and resources, and their reuse helps scientists to build new workflows faster and in a more reliable way. Search mechanisms in workflow repositories should provide different options for workflow discovery, but it is difficult for generic repositories to provide multiple mechanisms. This paper presents WorkflowHunt, a hybrid architecture for workflow search and discovery for generic repositories, which combines keyword and semantic search to allow finding relevant workflows using different search methods. We validated our architecture creating a prototype that uses real workflows and metadata from myExperiment, and compared search results via WorkflowHunt and via myExperiment’s search interface.
|
Saraiva, Márcio de Carvalho;
Medeiros, Claudia Bauzer
Finding out Topics in Educational Materials Using their Components (conference)
Proceedings of the 47th Annual Frontiers in Education (FIE) Conference, October 18-21, 2017,
Indianapolis, Indiana, USA,
2017.
(
Abstract |
Links |
BibTeX |
Tags:
Components, Content analysis and feature selection, educational material, Information extraction, Topics Classification
)
@conference{deSaraiva2017,
abstract = {The Web is witnessing an exponential growth of distributed and heterogeneous educational material. This hampers distinguishing among contents of these materials, as well as their retrieval. While information retrieval and classification mechanisms concentrate on corpus analysis, annotation approaches either target specific formats or require that a document follows interoperable standards. Rather than target only textual characteristics, our strategy is mainly based on components of educational material. The header, body, footer and numbering of slides and progress bar are examples of components of slides and videos. Though our work is general purpose, it is being tested against slides and videos from Coursera, a web platform that provides universal access to online education material and courses from universities and organizations around the world.},
address = {Indianapolis, Indiana, USA},
author = {Márcio de Carvalho Saraiva and Claudia Bauzer Medeiros},
booktitle = {Proceedings of the 47th Annual Frontiers in Education (FIE) Conference, October 18-21, 2017},
date = {2017-10-17},
keyword = {Components, Content analysis and feature selection, educational material, Information extraction, Topics Classification},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2017/07/ArtigoFIE2017-MarcioSaraiva.pdf},
organization = {IEEE},
title = {Finding out Topics in Educational Materials Using their Components},
year = {2017}
}
The Web is witnessing an exponential growth of distributed and heterogeneous educational material. This hampers distinguishing among contents of these materials, as well as their retrieval. While information retrieval and classification mechanisms concentrate on corpus analysis, annotation approaches either target specific formats or require that a document follows interoperable standards. Rather than target only textual characteristics, our strategy is mainly based on components of educational material. The header, body, footer and numbering of slides and progress bar are examples of components of slides and videos. Though our work is general purpose, it is being tested against slides and videos from Coursera, a web platform that provides universal access to online education material and courses from universities and organizations around the world.
|
Daltio, Jaudete
Views over Graph Databases: A Multifocus Approach for Heterogeneous Data (phdthesis)
University of Campinas - Institute of Computing,
phdthesis,
2017.
(
Abstract |
Links |
BibTeX |
Tags:
graph database,
PhDThesis,
Views,
Multifocus
)
@phdthesis{Daltio2017,
abstract = {Scientific research has become data-intensive and data-dependent. This new research paradigm requires sophisticated computer science techniques and technologies to support the life cycle of scientific data and collaboration among scientists from distinct areas. A major requirement is that researchers working in data-intensive interdisciplinary teams demand construction of multiple perspectives of the world, built over the same datasets. Present solutions cover a wide range of aspects, from the design of interoperability standards to the use of non-relational database management systems. None of these efforts, however, adequately meet the needs of multiple perspectives, which are called foci in the thesis. Basically, a focus is designed/built to cater to a research group (even within a single project) that needs to deal with a subset of data of interest, under multiple aggregation/generalization levels. The definition and creation of a focus are complex tasks that require mechanisms and engines to manipulate multiple representations of the same real world phenomenon. This PhD research aims to provide multiple foci over heterogeneous data. To meet this challenge, we deal with four research problems. The first two were (1) choosing an appropriate data management paradigm; and (2) eliciting multifocus requirements. Our work towards solving these problems made us choose graph databases to answer (1) and the concept of views in relational databases for (2). However, there is no consensual data model for graph databases and views are seldom discussed in this context. Thus, research problems (3) and (4) are: (3) specifying an adequate graph data model and (4) defining a framework to handle views on graph databases.
Our research in these problems results in the main contributions of this thesis: (i) to present the case for the use of graph databases in multifocus research as a persistence layer -- a schemaless and relationship-driven type of database that provides a full understanding of data connections; (ii) to define views for graph databases to support the need for multiple foci, considering graph data manipulation, graph algorithms and traversal tasks; (iii) to propose a property graph data model (PGDM) to fill the gap left by the absence of a full-fledged data model for graphs; (iv) to specify and implement a framework, named Graph-Kaleidoscope, that supports views over graph databases; and (v) to validate our framework for real world applications in two domains -- biodiversity and environmental resources -- typical examples of multidisciplinary research that involve the analysis of interactions of phenomena using heterogeneous data.},
author = {Jaudete Daltio},
date = {2017-09-12},
keyword = {graph database, PhDThesis, Views, Multifocus},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2017/12/jaudete_daltio_tese.pdf},
school = {University of Campinas - Institute of Computing},
title = {Views over Graph Databases: A Multifocus Approach for Heterogeneous Data},
year = {2017}
}
Scientific research has become data-intensive and data-dependent. This new research paradigm requires sophisticated computer science techniques and technologies to support the life cycle of scientific data and collaboration among scientists from distinct areas. A major requirement is that researchers working in data-intensive interdisciplinary teams demand construction of multiple perspectives of the world, built over the same datasets. Present solutions cover a wide range of aspects, from the design of interoperability standards to the use of non-relational database management systems. None of these efforts, however, adequately meet the needs of multiple perspectives, which are called foci in the thesis. Basically, a focus is designed/built to cater to a research group (even within a single project) that needs to deal with a subset of data of interest, under multiple aggregation/generalization levels. The definition and creation of a focus are complex tasks that require mechanisms and engines to manipulate multiple representations of the same real world phenomenon. This PhD research aims to provide multiple foci over heterogeneous data. To meet this challenge, we deal with four research problems. The first two were (1) choosing an appropriate data management paradigm; and (2) eliciting multifocus requirements. Our work towards solving these problems made us choose graph databases to answer (1) and the concept of views in relational databases for (2). However, there is no consensual data model for graph databases and views are seldom discussed in this context. Thus, research problems (3) and (4) are: (3) specifying an adequate graph data model and (4) defining a framework to handle views on graph databases.
Our research in these problems results in the main contributions of this thesis: (i) to present the case for the use of graph databases in multifocus research as a persistence layer -- a schemaless and relationship-driven type of database that provides a full understanding of data connections; (ii) to define views for graph databases to support the need for multiple foci, considering graph data manipulation, graph algorithms and traversal tasks; (iii) to propose a property graph data model (PGDM) to fill the gap left by the absence of a full-fledged data model for graphs; (iv) to specify and implement a framework, named Graph-Kaleidoscope, that supports views over graph databases; and (v) to validate our framework for real world applications in two domains -- biodiversity and environmental resources -- typical examples of multidisciplinary research that involve the analysis of interactions of phenomena using heterogeneous data.
|
Destro, Juliana Medeiros;
dos Reis, Julio Cesar;
Carvalho, Ariadne Maria Brito Rizzoni;
Ricarte, Ivan Luiz Marques
Experimental studies for revealing key factors of cross-language ontology alignments (conference)
Brazilian Ontology Research Seminar (ONTOBRAS 2017),
CEUR-WS,
2017.
(
Abstract |
Links |
BibTeX |
Tags:
)
@inproceedings{destro2017e,
title={Experimental studies for revealing key factors of cross-language ontology alignments},
author={Destro, Juliana Medeiros and dos Reis, Julio Cesar and Carvalho, Ariadne Maria Brito Rizzoni and Ricarte, Ivan Luiz Marques},
booktitle={Brazilian Ontology Research Seminar (ONTOBRAS 2017)},
year={2017}
}
Cross-language alignment between ontologies is relevant for the interoperability of systems in specific domains, such as in the life science domain. Although the literature has proposed techniques for the alignment of ontologies described in different languages, the influence of linguistic characteristics from domain-specific ontologies on such alignments has barely been appraised. This study proposes a series of experiments based on real-world mappings to understand the matching between ontologies in different languages. It investigates the role of a pivot-language related to the domain for the purpose of a fully automatic cross-language alignment. In particular, we analyse the influence of syntactic and semantic similarity methods and the structure of terms denoting concepts in ontologies. Experimental results, focused on the life science domain, indicate useful factors to take into account in the design of matching algorithms for domain-specific cross-language alignment.
|
Dos Reis, Julio Cesar;
Bonacin, Rodrigo;
Baranauskas, Maria Cecilia Calani
Recognizing Intentions in Free Text Messages: Studies with Portuguese Language (conference)
2017 IEEE 26th International Conference on Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE),
IEEE,
2017.
(
Abstract |
Links |
BibTeX |
Tags:
Illocutions,
Intention,
Pragmatics
)
@INPROCEEDINGS{dosReis2017r,
author={J. C. {Dos Reis} and R. {Bonacin} and M. C. {Calani Baranauskas}},
booktitle={2017 IEEE 26th International Conference on Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE)},
title={Recognizing Intentions in Free Text Messages: Studies with Portuguese Language},
year={2017},
pages={302-307}
}
Recent literature indicates that user intention analysis brings benefits for several computational tasks including information retrieval and communication. However, intentions are expressed implicitly in natural language texts. Domain related specificities and cultural language aspects hamper their machine representation and interpretation. This requires thorough investigations of intention recognition methods in free text to permit further exploring them. In this paper, we propose a technique based on the matching with representative key phrases and semantic extension of terms to detect instances of intention classes in natural language sentences. We explore a multidimensional framework of illocution categorization to structure the distinct intention classes. The conducted experiments with Portuguese language datasets of different characteristics reveal the potentialities of our method when analyzing the outcome of state-of-the-art machine-learning based text-mining techniques.
|
Tacioli, Leandro;
Toledo, Luís Felipe;
Medeiros, Claudia Bauzer
An Architecture for Animal Sound Identification based on Multiple Feature Extraction and Classification Algorithms (conference)
11th BreSci - Brazilian e-Science Workshop,
Sociedade Brasileira de Computação (SBC),
2017.
(
Abstract |
Links |
BibTeX |
Tags:
eScience, Feature Extraction, Pattern recognition
)
@conference{Tacioli2017,
abstract = {Automatic identification of animals is extremely useful for scientists, providing ways to monitor species and changes in ecological communities. The choice of effective audio features and classification techniques is a challenge on any audio recognition system, especially in bioacoustics that commonly uses several algorithms. This paper presents a novel software architecture that supports multiple feature extraction and classification algorithms to help on the identification of animal species from their recorded sounds. This architecture was implemented by the WASIS software, freely available on the Web.},
author = {Leandro Tacioli and Luís Felipe Toledo and Claudia Bauzer Medeiros},
booktitle = {11th BreSci - Brazilian e-Science Workshop},
date = {2017-07-06},
keyword = {eScience, Feature Extraction, Pattern recognition},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2017/06/Tacioli-BreSci2017.pdf},
publisher = {Sociedade Brasileira de Computação (SBC)},
title = {An Architecture for Animal Sound Identification based on Multiple Feature Extraction and Classification Algorithms},
year = {2017}
}
Automatic identification of animals is extremely useful for scientists, providing ways to monitor species and changes in ecological communities. The choice of effective audio features and classification techniques is a challenge on any audio recognition system, especially in bioacoustics that commonly uses several algorithms. This paper presents a novel software architecture that supports multiple feature extraction and classification algorithms to help on the identification of animal species from their recorded sounds. This architecture was implemented by the WASIS software, freely available on the Web.
|
Carvalho, Lucas A. M. C.;
Malaverri, Joana E. Gonzales;
Medeiros, Claudia Bauzer
Implementing W2Share: Supporting Reproducibility and Quality Assessment in eScience (conference)
11th BreSci - Brazilian e-Science Workshop,
Sociedade Brasileira de Computação (SBC),
2017.
(
Abstract |
Links |
BibTeX |
Tags:
Data quality, Provenance Information, Scientific Workflows, Semantic Annotation, W2Share framework
)
@conference{Carvalho2017,
abstract = {An open problem in the scientific community is that of supporting reproducibility and quality assessment of scientific experiments. Solutions need to be able to help scientists to reproduce experimental procedures in a reliable manner and, at the same time, to provide mechanisms for documenting the experiments to enhance integrity and transparency. Moreover, solutions need to incorporate features that allow the assessment of procedures, data used and results of those experiments. In this context, we designed W2Share, a framework to meet these requirements. This paper introduces our first implementation of W2Share, which guides scientists in a step-by-step process to ensure reproducibility based on a script-to-workflow conversion strategy. W2Share also incorporates features that allow annotating experiments with quality information. We validate our prototype using a real-world scenario in Bioinformatics.},
author = {Lucas A. M. C. Carvalho and Joana E. Gonzales Malaverri and Claudia Bauzer Medeiros},
booktitle = {11th BreSci - Brazilian e-Science Workshop},
date = {2017-07-06},
keyword = {Data quality, Provenance Information, Scientific Workflows, Semantic Annotation, W2Share framework},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2017/05/w2share-bresci2017-camera-ready.pdf},
pages = {1-8},
publisher = {Sociedade Brasileira de Computação (SBC)},
title = {Implementing W2Share: Supporting Reproducibility and Quality Assessment in eScience},
year = {2017}
}
An open problem in the scientific community is that of supporting reproducibility and quality assessment of scientific experiments. Solutions need to be able to help scientists to reproduce experimental procedures in a reliable manner and, at the same time, to provide mechanisms for documenting the experiments to enhance integrity and transparency. Moreover, solutions need to incorporate features that allow the assessment of procedures, data used and results of those experiments. In this context, we designed W2Share, a framework to meet these requirements. This paper introduces our first implementation of W2Share, which guides scientists in a step-by-step process to ensure reproducibility based on a script-to-workflow conversion strategy. W2Share also incorporates features that allow annotating experiments with quality information. We validate our prototype using a real-world scenario in Bioinformatics.
|
Tacioli, Leandro
WASIS - Bioacoustic Species Identification based on Multiple Feature Extraction and Classification Algorithms (mastersthesis)
Universidade Estadual de Campinas - UNICAMP,
mastersthesis,
2017.
(
Abstract |
Links |
BibTeX |
Tags:
Animals - Identification, Bioacoustics, Computer systems, Pattern recognition
)
@mastersthesis{Tacioli2017b,
abstract = {Automatic identification of animal species based on their sounds is one of the means to conduct research in bioacoustics. This research domain provides, for instance, ways to monitor rare and endangered species, to analyze changes in ecological communities, or ways to study the social meaning of animal calls in their behavioral contexts. Identification mechanisms are typically executed in two stages: feature extraction and classification. Both stages present challenges, in computer science and in bioacoustics. The choice of effective feature extraction and classification algorithms is a challenge on any audio recognition system, especially in bioacoustics. Considering the wide variety of animal groups studied, algorithms are tailored to specific groups. Audio classification techniques are also sensitive to the extracted features, and conditions surrounding the recordings. As a result, most bioacoustic software packages are not extensible, therefore limiting the kinds of recognition experiments that can be conducted. Given this scenario, this dissertation proposes a software architecture that allows multiple feature extraction, feature fusion and classification algorithms to support scientists and the general public on the identification of animal species through their recorded sounds. This architecture was implemented by the WASIS software, freely available on the Web. Since WASIS is open-source and extensible, experts can perform experiments with many combinations of descriptor-classifier pairs to choose the most appropriate ones for the identification of given animal sub-groups. A number of algorithms were implemented, serving as the basis for a comparative study that recommends sets of feature extraction and classification algorithms for three animal groups.},
author = {Leandro Tacioli},
date = {2017-07-03},
keyword = {Animals - Identification, Bioacoustics, Computer systems, Pattern recognition},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2017/07/LeandroTacioli-Mestrado.pdf},
school = {Universidade Estadual de Campinas - UNICAMP},
title = {WASIS - Bioacoustic Species Identification based on Multiple Feature Extraction and Classification Algorithms},
year = {2017}
}
Automatic identification of animal species based on their sounds is one of the means to conduct research in bioacoustics. This research domain provides, for instance, ways to monitor rare and endangered species, to analyze changes in ecological communities, or ways to study the social meaning of animal calls in their behavioral contexts. Identification mechanisms are typically executed in two stages: feature extraction and classification. Both stages present challenges, in computer science and in bioacoustics. The choice of effective feature extraction and classification algorithms is a challenge on any audio recognition system, especially in bioacoustics. Considering the wide variety of animal groups studied, algorithms are tailored to specific groups. Audio classification techniques are also sensitive to the extracted features, and conditions surrounding the recordings. As a result, most bioacoustic software packages are not extensible, therefore limiting the kinds of recognition experiments that can be conducted. Given this scenario, this dissertation proposes a software architecture that allows multiple feature extraction, feature fusion and classification algorithms to support scientists and the general public on the identification of animal species through their recorded sounds. This architecture was implemented by the WASIS software, freely available on the Web. Since WASIS is open-source and extensible, experts can perform experiments with many combinations of descriptor-classifier pairs to choose the most appropriate ones for the identification of given animal sub-groups. A number of algorithms were implemented, serving as the basis for a comparative study that recommends sets of feature extraction and classification algorithms for three animal groups.
|
Dos Reis, Julio Cesar;
Jensen, Cristiane Josely;
Bonacin, Rodrigo;
Hornung, Heiko;
Calani Baranauskas, Maria Cecília
Participatory Icons Specification for Expressing Intentions in Computer-Mediated Communications (conference)
Enterprise Information Systems,
Springer,
2017.
(
Abstract |
Links |
BibTeX |
Tags:
Icons,
Emoticons,
Meanings,
Intentions,
Pragmatics,
Communication,
HCI
)
@InProceedings{dosReis2017p,
author="Dos Reis, Julio Cesar
and Jensen, Cristiane Josely
and Bonacin, Rodrigo
and Hornung, Heiko
and Calani Baranauskas, Maria Cec{\'i}lia",
editor="Hammoudi, Slimane
and Maciaszek, Leszek A.
and Missikoff, Michele M.
and Camp, Olivier
and Cordeiro, Jos{\'e}",
title="Participatory Icons Specification for Expressing Intentions in Computer-Mediated Communications",
booktitle="Enterprise Information Systems",
year="2017",
publisher="Springer International Publishing",
address="Cham",
pages="414--435",
abstract="Web-mediated conversations require treating intentions more explicitly. The literature lacks adequate design methods and interactive mechanisms to support users in the sharing of intentions. This research assumes that icons representing emotions play a central role as means for aiding users to convey intentions in communication tasks. This article proposes a method to specify emoticons for representing the users' intentions, named ``intenticons''. The work explores Speech Act Theory and Semiotics in a conceptual framework to structure classes of intentions. We conduct participatory activities to experiment with the method, involving 40 users. The obtained intenticons were evaluated with a different set of users to reveal their effectiveness. The obtained results suggest the feasibility of the method to select and enhance emoticons for intention expression. Evaluations point out that most of the achieved intenticons indicate an acceptable degree of representativeness for the intention classes.",
isbn="978-3-319-62386-3"
}
Web-mediated conversations require treating intentions more explicitly. The literature lacks adequate design methods and interactive mechanisms to support users in the sharing of intentions. This research assumes that icons representing emotions play a central role as means for aiding users to convey intentions in communication tasks. This article proposes a method to specify emoticons for representing the users' intentions, named ``intenticons''. The work explores Speech Act Theory and Semiotics in a conceptual framework to structure classes of intentions. We conduct participatory activities to experiment with the method, involving 40 users. The obtained intenticons were evaluated with a different set of users to reveal their effectiveness. The obtained results suggest the feasibility of the method to select and enhance emoticons for intention expression. Evaluations point out that most of the achieved intenticons indicate an acceptable degree of representativeness for the intention classes.
|
Filho, Francisco José Nardi
Hybrid Narrative and Clinical Knowledge Base for Emergency Medicine Training (mastersthesis)
Universidade Estadual de Campinas - UNICAMP,
mastersthesis,
2017.
(
Abstract |
Links |
BibTeX |
Tags:
Artificial intelligence - Medical applications, Emergency medicine, Expert systems (Computer science), Information storage and retrieval systems, Information systems
)
@mastersthesis{NardiFilho2017,
abstract = {Software for medical training usually follows two types of approaches for the representation of its data. One type is the software of simulation-based training or virtual patients - which have highly structured representations of the clinical data and simulation plans. Another type comprises systems that focus on the narrative of a clinical case in free-text format - e.g., the Jacinto emergency medicine learning environment. In this case, the clinical data mixes with the narrative in unstructured format. Thus, we propose a model for a hybrid narrative and clinical knowledge base for emergency medicine training that combines both approaches. We hypothesize that by connecting narratives with structured clinical information, we can take advantage of the strongest points of each approach. On the one hand, structured clinical data offers flexibility for the production of case variations and alternative plans, which gives the machine more autonomy to assess user performance. On the other hand, free-text narratives enable the introduction of real scenario relevant aspects and context, beyond clinical data. In this work, we present a practical experiment involving the database of the Jacinto emergency medicine learning environment.},
author = {Francisco José Nardi Filho},
date = {2017-05-19},
keyword = {Artificial intelligence - Medical applications, Emergency medicine, Expert systems (Computer science), Information storage and retrieval systems, Information systems},
link = {http://www.repositorio.unicamp.br/bitstream/REPOSIP/325067/1/Nardi%20Filho_FranciscoJose_M.pdf},
school = {Universidade Estadual de Campinas - UNICAMP},
title = {Hybrid Narrative and Clinical Knowledge Base for Emergency Medicine Training},
year = {2017}
}
Software for medical training usually follows two types of approaches for the representation of its data. One type is the software of simulation-based training or virtual patients - which have highly structured representations of the clinical data and simulation plans. Another type comprises systems that focus on the narrative of a clinical case in free-text format - e.g., the Jacinto emergency medicine learning environment. In this case, the clinical data mixes with the narrative in unstructured format. Thus, we propose a model for a hybrid narrative and clinical knowledge base for emergency medicine training that combines both approaches. We hypothesize that by connecting narratives with structured clinical information, we can take advantage of the strongest points of each approach. On the one hand, structured clinical data offers flexibility for the production of case variations and alternative plans, which gives the machine more autonomy to assess user performance. On the other hand, free-text narratives enable the introduction of real scenario relevant aspects and context, beyond clinical data. In this work, we present a practical experiment involving the database of the Jacinto emergency medicine learning environment.
|
de Araújo, Ricardo José;
dos Reis, Julio Cesar;
Bonacin, Rodrigo
Colors Similarity Computation for User Interface Adaptation (conference)
International Conference on Universal Access in Human-Computer Interaction,
Springer,
2017.
(
Abstract |
Links |
BibTeX |
Tags:
Accessibility,
Color blindness,
Interface adaptation,
Color similarity
)
@ARTICLE{deAraujo2017333,
author={de Araújo, R.J. and dos Reis, J.C. and Bonacin, R.},
title={Colors similarity computation for user interface adaptation},
journal={Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
year={2017},
volume={10277 LNCS},
pages={333-345},
doi={10.1007/978-3-319-58706-6_27},
url={https://www.scopus.com/inward/record.uri?eid=2-s2.0-85025145600&doi=10.1007%2f978-3-319-58706-6_27&partnerID=40&md5=69b24b51ec60b56bc0da373c245d0f5f},
abstract={Color blind people face various difficulties interacting with web systems. Interface adaptation techniques designed to recolor images and web interfaces may deal with several color blindness visualization issues. However, different situations, preferences and individual needs make it complex to choose the most suitable recoloring technique. This article proposes an original algorithm to compute similarity between colors. We aim to support the decision process of selecting the most suitable adaptation technique according to the type of color blindness and interaction context. The algorithm ponders arguments for taking the users’ preferences and limitations into account. Our experimental analysis implements various configurations by testing the weights in the color distance calculation according to the colorblindness type. The obtained results reveal the advantages of considering the type of colorblindness in the color similarity computation.},
author_keywords={Accessibility; Color blindness; Color similarity; Interface adaptation},
publisher={Springer Verlag},
document_type={Conference Paper},
source={Scopus},
}
Color blind people face various difficulties interacting with web systems. Interface adaptation techniques designed to recolor images and web interfaces may deal with several color blindness visualization issues. However, different situations, preferences and individual needs make it complex to choose the most suitable recoloring technique. This article proposes an original algorithm to compute similarity between colors. We aim to support the decision process of selecting the most suitable adaptation technique according to the type of color blindness and interaction context. The algorithm ponders arguments for taking the users’ preferences and limitations into account. Our experimental analysis implements various configurations by testing the weights in the color distance calculation according to the colorblindness type. The obtained results reveal the advantages of considering the type of colorblindness in the color similarity computation.
|
Gonçalves, F.M.;
Duarte, E.F.;
Dos Reis, J.C.;
Baranauskas, M.C.C.
An analysis of online discussion platforms for academic deliberation support (conference)
International Conference on Social Computing and Social Media,
Springer,
2017.
(
Abstract |
Links |
BibTeX |
Tags:
Academic deliberation,
Collaboration,
Considerate,
Debate hub,
HCI,
Interaction design,
Social computing,
Trello
)
@ARTICLE{Gonçalves201791,
author={Gonçalves, F.M. and Duarte, E.F. and Dos Reis, J.C. and Baranauskas, M.C.C.},
title={An analysis of online discussion platforms for academic deliberation support},
journal={Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
year={2017},
volume={10283 LNCS},
pages={91-109},
doi={10.1007/978-3-319-58562-8_8},
url={https://www.scopus.com/inward/record.uri?eid=2-s2.0-85025124946&doi=10.1007%2f978-3-319-58562-8_8&partnerID=40&md5=65d4ad5eb5570cd951773bc6d72565ca},
abstract={Asynchronous online discussions are relevant for supporting and promoting debates among people. Nevertheless, achieving beneficial discussion requires adequate software applications with specific features to support people’s participation, e.g., mechanisms for structured pros and cons arguments. Although the literature is vast in discussing online forum usage, requirements for the design of platforms for academic deliberation have not been addressed in the same proportion. In this paper, we analyze three online discussion platforms for deliberation. We conduct a structural analysis regarding their interaction concepts and, based on activities of graduate students attending a Human-Computer Interaction discipline, this study conducts a usage analysis of the platforms. Results reveal the level of participants’ engagement in academic discussions and the effects on their learning perception. Moreover, results expose the impact of software design choices in the deliberation outcome.},
author_keywords={Academic deliberation; Collaboration; Considerate; Debate hub; HCI; Interaction design; Social computing; Trello},
publisher={Springer Verlag},
document_type={Conference Paper},
source={Scopus},
}
Asynchronous online discussions are relevant for supporting and promoting debates among people. Nevertheless, achieving beneficial discussion requires adequate software applications with specific features to support people’s participation, e.g., mechanisms for structured pros and cons arguments. Although the literature is vast in discussing online forum usage, requirements for the design of platforms for academic deliberation have not been addressed in the same proportion. In this paper, we analyze three online discussion platforms for deliberation. We conduct a structural analysis regarding their interaction concepts and, based on activities of graduate students attending a Human-Computer Interaction discipline, this study conducts a usage analysis of the platforms. Results reveal the level of participants’ engagement in academic discussions and the effects on their learning perception. Moreover, results expose the impact of software design choices in the deliberation outcome.
|
Destro, Juliana Medeiros;
Reis, Julio Cesar dos;
Carvalho, Ariadne Maria Brito Rizzoni;
Ricarte, Ivan Luiz Marques
Influence of Semantic Similarity Measures on Ontology Cross-Language Mappings (conference)
Proceedings of the Symposium on Applied Computing,
ACM,
2017.
(
Abstract |
Links |
BibTeX |
Tags:
biomedical ontologies,
cross-language matching,
semantic similarity,
ontologies,
ontology mapping
)
@inproceedings{destro2017i,
author = {Destro, Juliana Medeiros and Reis, Julio Cesar dos and Carvalho, Ariadne Maria Brito Rizzoni and Ricarte, Ivan Luiz Marques},
title = {Influence of Semantic Similarity Measures on Ontology Cross-Language Mappings},
year = {2017},
isbn = {9781450344869},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3019612.3019836},
doi = {10.1145/3019612.3019836},
booktitle = {Proceedings of the Symposium on Applied Computing},
pages = {323–329},
numpages = {7},
keywords = {biomedical ontologies, cross-language matching, semantic similarity, ontologies, ontology mapping},
location = {Marrakech, Morocco},
series = {SAC ’17}
}
Cross-language mappings establish relations between ontology concepts defined in different languages. Similarity measures calculate the degree of relatedness between concepts to support matching between two distinct ontologies. Cross-language matching remains an open research issue due to the difficulties in taking advantage of similarity computation. This article investigates the effects of different semantic similarity measures on the identification of cross-language mappings. We carry out experiments exploring real-world biomedical ontology mappings to comprehend the behaviour of computed similarity values. The obtained results indicate the relevance of the domain-related background knowledge in the effectiveness of semantic measures for ontology cross-language alignment.
|
2016 |
Pantoja, Fagner L.;
Cavoto, Patrícia;
Reis, Julio Cesar dos;
Santanchè, André
Generating Knowledge Networks from Phenotypic Descriptions (conference)
Proceedings of the 12th International Conference on eScience,
Baltimore, MD, USA,
2016.
(
Links |
BibTeX |
Tags:
Curation
)
@conference{Pantoja,
address = {Baltimore, MD, USA},
author = {Fagner L. Pantoja and Patrícia Cavoto and Julio Cesar dos Reis and André Santanchè},
booktitle = {Proceedings of the 12th International Conference on eScience},
date = {2016-10-24},
keyword = {Curation},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2016/12/1095anav.pdf},
organization = {IEEE},
title = {Generating Knowledge Networks from Phenotypic Descriptions},
year = {2016}
}
|
Carvalho, Lucas A. M. C.;
Belhajjame, Khalid;
Medeiros, Claudia Bauzer
Converting Scripts into Reproducible Workflow Research Objects (conference)
Proceedings of the 2016 IEEE 12th International Conference on eScience,
IEEE,
Baltimore, MD, USA, October 23-27,
2016.
(
Abstract |
Links |
BibTeX |
Tags:
Methodology, Provenance Information, Scientific Workflows
)
@conference{Carvalho2016Converting,
abstract = {Scientific discovery and analysis are increasingly computational and data-driven. While scripting languages, such as Python, R and Perl, are the means of choice of the majority of scientists to encode and run their data analysis, scripts are generally not amenable to reuse or reproducibility. Scripts rarely get reused or even shared with third-party scientists. We argue in this paper that the reproducibility of scripts can be promoted by converting them into workflow research objects. A workflow research object encodes a script into a production (executable) workflow that is accompanied by annotations, example datasets and provenance traces of their execution, thereby allowing third-party users to understand the data analysis encoded by the original script, run the associated workflow using the same or a different dataset, or even repurpose it for a different analysis. To this end, we present a methodology for converting scripts into workflow research objects in a principled manner, guided by requirements that we elicited for this purpose. The methodology exploits tools and standards that have been developed by the community, in particular YesWorkflow, Research Objects and the W3C PROV. It is showcased using a real-world use case from the field of Molecular Dynamics.},
address = {Baltimore, MD, USA, October 23-27},
author = {Lucas A. M. C. Carvalho and Khalid Belhajjame and Claudia Bauzer Medeiros},
booktitle = {Proceedings of the 2016 IEEE 12th International Conference on eScience},
date = {2016-10-23},
keyword = {Methodology, Provenance Information, Scientific Workflows},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2016/08/converting-scripts-reproducible-camera-ready.pdf},
publisher = {IEEE},
title = {Converting Scripts into Reproducible Workflow Research Objects},
year = {2016}
}
Scientific discovery and analysis are increasingly computational and data-driven. While scripting languages, such as Python, R and Perl, are the means of choice of the majority of scientists to encode and run their data analysis, scripts are generally not amenable to reuse or reproducibility. Scripts rarely get reused or even shared with third-party scientists. We argue in this paper that the reproducibility of scripts can be promoted by converting them into workflow research objects. A workflow research object encodes a script into a production (executable) workflow that is accompanied by annotations, example datasets and provenance traces of their execution, thereby allowing third-party users to understand the data analysis encoded by the original script, run the associated workflow using the same or a different dataset, or even repurpose it for a different analysis. To this end, we present a methodology for converting scripts into workflow research objects in a principled manner, guided by requirements that we elicited for this purpose. The methodology exploits tools and standards that have been developed by the community, in particular YesWorkflow, Research Objects and the W3C PROV. It is showcased using a real-world use case from the field of Molecular Dynamics.
|
Saraiva, Márcio de Carvalho;
Medeiros, Claudia Bauzer
Use of graphs and taxonomic classifications to analyze content relationships among courseware (conference)
Proceedings of the 31st Brazilian Symposium on Databases,
Salvador, Bahia, Brazil, October 4-7,
2016.
(
Abstract |
Links |
BibTeX |
Tags:
content analysis, educational material, graph database, multiple relationships
)
@conference{Saraiva2016Short,
abstract = {The search for educational content in courseware repositories is laborious and time consuming. There is an abundance of such repositories, and research efforts to facilitate search, but access is guided by keywords and/or terms selected by courseware authors, thus lacking flexibility. The goal of this project is to design and develop a suite of tools to assist users to find, analyze and select pieces of educational content that are relevant to their learning goals. Contributions will be both at the algorithm and software design level, and at the user (application) level.},
address = {Salvador, Bahia, Brazil, October 4-7},
author = {Márcio de Carvalho Saraiva and Claudia Bauzer Medeiros},
booktitle = {Proceedings of the 31st Brazilian Symposium on Databases},
date = {2016-10-04},
issn = {2316-5170},
keyword = {content analysis, educational material, graph database, multiple relationships},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2017/01/marciosaraiva-sbbd2016.pdf},
organization = {Sociedade Brasileira de Computação},
pages = {265-270},
title = {Use of graphs and taxonomic classifications to analyze content relationships among courseware},
year = {2016}
}
The search for educational content in courseware repositories is laborious and time consuming. There is an abundance of such repositories, and research efforts to facilitate search, but access is guided by keywords and/or terms selected by courseware authors, thus lacking flexibility. The goal of this project is to design and develop a suite of tools to assist users to find, analyze and select pieces of educational content that are relevant to their learning goals. Contributions will be both at the algorithm and software design level, and at the user (application) level.
|
Carvalho, Lucas A. M. C.;
Medeiros, Claudia Bauzer
Provenance-Based Infrastructure to Support Reuse of Computational Experiments (conference)
Proceedings of the Satellite Events of the 31st Brazilian Symposium on Databases (Thesis and Dissertations Workshop),
Salvador, Bahia, Brazil, October 4-7,
978-85-7669-343-7,
2016.
(
Abstract |
Links |
BibTeX |
Tags:
Provenance Information, Scientific Workflows, Semantic Annotation, Workflow Retrieval
)
@conference{Carvalho2016Wtd,
abstract = {One recurrent problem in multidisciplinary research is finding reusable objects (e.g., scripts, code, documents, workflows) that can be used across disciplines to enhance collaboration. This paper presents our ongoing work taking advantage of provenance information, combined with scientific workflows, to help find such objects. We also present challenges posed by provenance-based retrieval, which we propose as a solution for transdisciplinary scientific collaboration via reuse. Our case study in molecular dynamics experiments is part of a larger multi-scale experimental scenario that requires cooperation involving scientists from different disciplines.},
address = {Salvador, Bahia, Brazil, October 4-7},
author = {Lucas A. M. C. Carvalho and Claudia Bauzer Medeiros},
booktitle = {Proceedings of the Satellite Events of the 31st Brazilian Symposium on Databases (Thesis and Dissertations Workshop)},
date = {2016-10-04},
isbn = {978-85-7669-343-7},
keyword = {Provenance Information, Scientific Workflows, Semantic Annotation, Workflow Retrieval},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2017/01/wtdbd2016-camera-ready.pdf},
organization = {Sociedade Brasileira de Computação},
pages = {74-81},
title = {Provenance-Based Infrastructure to Support Reuse of Computational Experiments},
year = {2016}
}
One recurrent problem in multidisciplinary research is finding reusable objects (e.g., scripts, code, documents, workflows) that can be used across disciplines to enhance collaboration. This paper presents our ongoing work taking advantage of provenance information, combined with scientific workflows, to help find such objects. We also present challenges posed by provenance-based retrieval, which we propose as a solution for transdisciplinary scientific collaboration via reuse. Our case study in molecular dynamics experiments is part of a larger multi-scale experimental scenario that requires cooperation involving scientists from different disciplines.
|
Pantoja, Fagner Leal
Generating Knowledge Networks from Phenotypic Descriptions (mastersthesis)
University of Campinas,
mastersthesis,
2016.
(
Links |
BibTeX |
Tags:
Curation
)
@mastersthesis{Pantoja2016,
author = {Fagner Leal Pantoja},
date = {2016-08-05},
keyword = {Curation},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2016/12/fagner-dissertacao-vfinal.pdf},
school = {University of Campinas},
title = {Generating Knowledge Networks from Phenotypic Descriptions},
year = {2016}
}
|
Borges, Luana Loubet
BioGraph: Linking Biological Bases Across Organisms (mastersthesis)
Universidade Estadual de Campinas - UNICAMP,
mastersthesis,
2016.
(
Abstract |
Links |
BibTeX |
Tags:
database, Ontologies (Information retrieval)
)
@mastersthesis{Borges2016,
abstract = {Representing data as networks has been shown to be a powerful approach for data analysis in biodiversity, e.g., interactions among organisms; relations among genes and phenotypes, etc. In this context, databases and repositories following a graph model (e.g., RDF) have been increasingly used to interconnect information and to support network-driven analysis. Usually, this kind of analysis requires gathering together and linking data from several distinct and heterogeneous sources. In this work, we investigate this challenge in the context of biological bases focusing on the characterization of living organisms, especially their phenotypes and diseases. It includes the rich diversity of Model Organism Databases (MODs) -- repositories specialized in a particular taxon -- widely used in biological and medical studies. We exploit a lightweight integration approach, inspired by the Linked Open Data initiative, mapping several biological bases in a unified graph database -- our BioGraph -- and linking key elements to offer an interconnected view over the data. We present here practical experiments to validate the proposal and to show how BioGraph can contribute to biological data analysis in a network perspective.},
author = {Luana Loubet Borges},
date = {2016-08-05},
keyword = {database, Ontologies (Information retrieval)},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2016/10/dissertacao_versao_final_Luana_Loubet.pdf},
school = {Universidade Estadual de Campinas - UNICAMP},
title = {BioGraph: Linking Biological Bases Across Organisms},
year = {2016}
}
Representing data as networks has been shown to be a powerful approach for data analysis in biodiversity, e.g., interactions among organisms; relations among genes and phenotypes, etc. In this context, databases and repositories following a graph model (e.g., RDF) have been increasingly used to interconnect information and to support network-driven analysis. Usually, this kind of analysis requires gathering together and linking data from several distinct and heterogeneous sources. In this work, we investigate this challenge in the context of biological bases focusing on the characterization of living organisms, especially their phenotypes and diseases. It includes the rich diversity of Model Organism Databases (MODs) -- repositories specialized in a particular taxon -- widely used in biological and medical studies. We exploit a lightweight integration approach, inspired by the Linked Open Data initiative, mapping several biological bases in a unified graph database -- our BioGraph -- and linking key elements to offer an interconnected view over the data. We present here practical experiments to validate the proposal and to show how BioGraph can contribute to biological data analysis in a network perspective.
|
Carvalho, Lucas A. M. C.;
Silveira, Rodrigo L.;
Pereira, Caroline S.;
Skaf, Munir S.;
Medeiros, Claudia Bauzer
Provenance-Based Retrieval: Fostering Reuse and Reproducibility Across Scientific Disciplines (conference)
Provenance and Annotation of Data and Processes (Proceedings of 6th International Provenance and Annotation Workshop - IPAW 2016),
Springer International Publishing,
McLean, Virginia, U.S.A.,
978-3319405926,
2016.
(
Abstract |
Links |
BibTeX |
Tags:
Provenance Information, Scientific Workflows, Semantic Annotation, Workflow Retrieval
)
@conference{Carvalho2016,
abstract = {When computational researchers from several domains cooperate, one recurrent problem is finding tools, methods and approaches that can be used across disciplines, to enhance collaboration through reuse. The paper presents our ongoing work to meet the challenges posed by provenance-based retrieval, proposed as a solution for transdisciplinary scientific collaboration via reuse of scientific workflows. Our work is based upon a case study in molecular dynamics experiments, as part of a larger multi-scale experimental scenario.},
address = {McLean, Virginia, U.S.A.},
author = {Lucas A. M. C. Carvalho and Rodrigo L. Silveira and Caroline S. Pereira and Munir S. Skaf and Claudia Bauzer Medeiros},
booktitle = {Provenance and Annotation of Data and Processes (Proceedings of 6th International Provenance and Annotation Workshop - IPAW 2016)},
chapter = {17},
date = {2016-06-06},
editor = {Marta Mattoso and Boris Glavic},
isbn = {978-3319405926},
keyword = {Provenance Information, Scientific Workflows, Semantic Annotation, Workflow Retrieval},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2016/05/ipaw2016-poster-cameraready.pdf},
pages = {1-4},
publisher = {Springer International Publishing},
series = {LNCS 9672},
title = {Provenance-Based Retrieval: Fostering Reuse and Reproducibility Across Scientific Disciplines},
year = {2016}
}
When computational researchers from several domains cooperate, one recurrent problem is finding tools, methods and approaches that can be used across disciplines, to enhance collaboration through reuse. The paper presents our ongoing work to meet the challenges posed by provenance-based retrieval, proposed as a solution for transdisciplinary scientific collaboration via reuse of scientific workflows. Our work is based upon a case study in molecular dynamics experiments, as part of a larger multi-scale experimental scenario.
|
Mota, Matheus Silva;
Reis, Julio Cesar dos;
Goutte, Sandra;
Santanchè, André
Multiscaling a Graph-based Dataspace [accepted] (article)
Journal of Information and Data Management - JIDM,
2016.
(
Abstract |
Links |
BibTeX |
Tags:
Journal Paper
)
@article{linkedscalesjidm,
abstract = {Biologists increasingly need a unified view to understand and discover relationships among data elements scattered along data sources with different levels of heterogeneity. Existing approaches usually adopt ad-hoc heavyweight integration strategies, requiring a costly upfront effort involving a monolithic chain of steps to handle specific formats/schemas, with low or no reuse. This article proposes the conception of a multiscale-based dataspace architecture, called LinkedScales. It departs from the notion of integration-scales within a dataspace, and defines a systematic and progressive integration process via graph-based transformations over a graph database. LinkedScales aims to provide a homogeneous view of heterogeneous sources, allowing systems to reach and produce different integration levels on demand, going from raw representations (lower scales) towards ontology-like structures (higher scales). We describe inner aspects of the architecture and its transformation process by introducing the Multiscale Transformation Graph, which tracks the transformation process among scales. Although the proposed framework can be applied to several scenarios, this work focuses on the biology domain addressing the organism-centric analysis scenario. Obtained results reveal the viability of the framework and its implementation to integrate relevant resources for the organism-centric scenario.},
author = {Matheus Silva Mota and Julio Cesar dos Reis and Sandra Goutte and André Santanchè},
date = {2016-05-01},
journal = {Journal of Information and Data Management - JIDM},
keyword = {Journal Paper},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2016/10/multiscaling-graph-based.pdf},
pages = {16},
title = {Multiscaling a Graph-based Dataspace [accepted]},
year = {2016}
}
Biologists increasingly need a unified view to understand and discover relationships among data elements scattered along data sources with different levels of heterogeneity. Existing approaches usually adopt ad-hoc heavyweight integration strategies, requiring a costly upfront effort involving a monolithic chain of steps to handle specific formats/schemas, with low or no reuse. This article proposes the conception of a multiscale-based dataspace architecture, called LinkedScales. It departs from the notion of integration-scales within a dataspace, and defines a systematic and progressive integration process via graph-based transformations over a graph database. LinkedScales aims to provide a homogeneous view of heterogeneous sources, allowing systems to reach and produce different integration levels on demand, going from raw representations (lower scales) towards ontology-like structures (higher scales). We describe inner aspects of the architecture and its transformation process by introducing the Multiscale Transformation Graph, which tracks the transformation process among scales. Although the proposed framework can be applied to several scenarios, this work focuses on the biology domain addressing the organism-centric analysis scenario. Obtained results reveal the viability of the framework and its implementation to integrate relevant resources for the organism-centric scenario.
|
Gonçalves, Fabrício Matheus
Design de interação em sistemas computacionais para apoio à aprendizagem ativa: uma abordagem sistêmica (mastersthesis)
University of Campinas - Unicamp,
mastersthesis,
2016.
(
Abstract |
Links |
BibTeX |
Tags:
Active learning, Agile software development, Autonomy, Interação humano-computador, Peer instruction, Peer teaching, Self-determination (Psychology), Semiótica organizacional
)
@mastersthesis{goncalves2016design,
abstract = {In formal learning contexts, there is a diversity of interests and skills that need to have space in the interaction between those involved in the production and sharing of knowledge. Active Learning refers to a set of strategies through which the participation of key actors in the educational environment goes beyond the unidirectional expository/receptive model of knowledge, involving activities where discussion and collaboration with others have an important role in reflection and construction of meaning. Computer systems that support learning processes have not been designed for the diversity of skills and demands present in such environments. In this thesis we argue that if we want to develop solutions that make sense to stakeholders and are suited to the complexity that characterizes Active Learning scenarios, we need to include them in the design cycle. In this work we adopt a perspective based on Organizational Semiotics for the analysis of Active Learning scenarios and propose a systemic vision for interaction design in such environments, including motivational aspects. Work results include: a cyclical design process that we call "on the fly design", and a system for collaborative authoring and review of Active Learning activities. This process was applied in the incremental construction of the system, which, in turn, was used in a real context of higher education. The system was evaluated iteratively, based on feedback from stakeholders in the situated context, feeding back the characterization of the process. The process was effective in building an emerging system for supporting the collaboration and participation of those involved in the Active Learning scenarios studied.},
author = {Fabrício Matheus Gonçalves},
date = {2016-02-04},
keyword = {Active learning, Agile software development, Autonomy, Interação humano-computador, Peer instruction, Peer teaching, Self-determination (Psychology), Semiótica organizacional},
link = {http://www.reposip.unicamp.br/xmlui/bitstream/handle/REPOSIP/305633/Goncalves%2c%20Fabricio%20Matheus_M.pd?sequence=1&isAllowed=y},
school = {University of Campinas - Unicamp},
title = {Design de interação em sistemas computacionais para apoio à aprendizagem ativa: uma abordagem sistêmica},
year = {2016}
}
In formal learning contexts, there is a diversity of interests and skills that need to have space in the interaction between those involved in the production and sharing of knowledge. Active Learning refers to a set of strategies through which the participation of key actors in the educational environment goes beyond the unidirectional expository/receptive model of knowledge, involving activities where discussion and collaboration with others have an important role in reflection and construction of meaning. Computer systems that support learning processes have not been designed for the diversity of skills and demands present in such environments. In this thesis we argue that if we want to develop solutions that make sense to stakeholders and are suited to the complexity that characterizes Active Learning scenarios, we need to include them in the design cycle. In this work we adopt a perspective based on Organizational Semiotics for the analysis of Active Learning scenarios and propose a systemic vision for interaction design in such environments, including motivational aspects. Work results include: a cyclical design process that we call 'on the fly design', and a system for collaborative authoring and review of Active Learning activities. This process was applied in the incremental construction of the system, which, in turn, was used in a real context of higher education. The system was evaluated iteratively, based on feedback from stakeholders in the situated context, feeding back the characterization of the process. The process was effective in building an emerging system for supporting the collaboration and participation of those involved in the Active Learning scenarios studied.
|
Cavoto, Patrícia
ReGraph: Bridging Relational and Graph Databases (mastersthesis)
Universidade Estadual de Campinas - UNICAMP,
mastersthesis,
2016.
(
Abstract |
Links |
BibTeX |
Tags:
Databases, Ontologies (Information retrieval), Software development - Databases
)
@mastersthesis{Cavoto2016,
abstract = {Networks are everywhere. From social interactions: family, friends, hobbies; passing through computer science: computers on the Internet; to nature: as food chains. Recent research shows the importance of links and network analysis to discover knowledge in existing data. Moreover, the Linked Open Data and Semantic Web efforts empowered the fast growth of open knowledge repositories on the web, mainly in the RDF (Resource Description Framework) graph model. However, a lot of data are stored in relational databases, whose model has not been designed to address queries with many transitive relations. On the other hand, the flexible graph model is suitable for data analysis focusing on links, their transitivity and the network topology, e.g., a connected component analysis. Therefore, our research is inspired by the data OLAP (OnLine Analytical Processing) approach of creating a special database designed for data analysis, a network-driven data analysis, using graph databases. In this dissertation, we present ReGraph, a framework to map data from a relational to a graph database, managing a dynamic coexistence and evolution of both, not supported by related work. ReGraph has minimal impact on the existing infrastructure, providing a flexible and tailored graph model for each relational schema. It uses an initial ETL (Extract, Transform and Load) process to replicate the existing data in the graph database. A scheduled service is responsible for automatically reflecting changes in the relational data into the graph, keeping both synchronized. ReGraph also provides an annotation functionality to materialize inferences and to support data enrichment, which enables linking the local database to global knowledge graphs on the Web. We have used the ReGraph framework to generate FishGraph, a graph database created from the FishBase relational database. Using FishGraph we developed experiments to analyze the connections among thousands of identification keys and species, and we have linked local data to DBpedia, creating annotations over the local graph and providing new knowledge from existing data.},
author = {Patrícia Cavoto},
date = {2016-02-04},
keyword = {Databases, Ontologies (Information retrieval), Software development - Databases},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2016/05/Cavoto2016.pdf},
school = {Universidade Estadual de Campinas - UNICAMP},
title = {ReGraph: Bridging Relational and Graph Databases},
year = {2016}
}
Networks are everywhere. From social interactions: family, friends, hobbies; passing through computer science: computers on the Internet; to nature: as food chains. Recent research shows the importance of links and network analysis to discover knowledge in existing data. Moreover, the Linked Open Data and Semantic Web efforts empowered the fast growth of open knowledge repositories on the web, mainly in the RDF (Resource Description Framework) graph model. However, a lot of data are stored in relational databases, whose model has not been designed to address queries with many transitive relations. On the other hand, the flexible graph model is suitable for data analysis focusing on links, their transitivity and the network topology, e.g., a connected component analysis. Therefore, our research is inspired by the data OLAP (OnLine Analytical Processing) approach of creating a special database designed for data analysis, a network-driven data analysis, using graph databases. In this dissertation, we present ReGraph, a framework to map data from a relational to a graph database, managing a dynamic coexistence and evolution of both, not supported by related work. ReGraph has minimal impact on the existing infrastructure, providing a flexible and tailored graph model for each relational schema. It uses an initial ETL (Extract, Transform and Load) process to replicate the existing data in the graph database. A scheduled service is responsible for automatically reflecting changes in the relational data into the graph, keeping both synchronized. ReGraph also provides an annotation functionality to materialize inferences and to support data enrichment, which enables linking the local database to global knowledge graphs on the Web. We have used the ReGraph framework to generate FishGraph, a graph database created from the FishBase relational database. Using FishGraph we developed experiments to analyze the connections among thousands of identification keys and species, and we have linked local data to DBpedia, creating annotations over the local graph and providing new knowledge from existing data.
|
Daltio, Jaudete;
Medeiros, Claudia Bauzer
A View Handler for Semantic Graphs (conference)
Proceedings 10th IEEE ICSC,
Los Angeles,
2016.
(
Abstract |
Links |
BibTeX |
Tags:
Graph Databases
)
@conference{Daltio2016,
abstract = {Scientific data often come from networks with complex relationships between their entities and can be properly modeled as semantic graphs. However, once designed, there is no simple way to cross through different designs in graph databases. The goal of this research is to specify and implement a framework to overcome these limitations, allowing users to build and explore arbitrary perspectives in graphs. The framework uses the concept of views to represent a perspective. The main contribution is to help scientists run models and analyze network (graph) data according to their specific design needs. The framework is under implementation and validation using a case study on water resource data.},
address = {Los Angeles},
author = {Jaudete Daltio and Claudia Bauzer Medeiros},
date = {2016-02-03},
booktitle = {Proceedings of the 10th IEEE International Conference on Semantic Computing (ICSC)},
keyword = {Graph Databases},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2016/02/PID4045125.pdf},
pages = {1-5},
publisher = {IEEE},
title = {A View Handler for Semantic Graphs},
year = {2016}
}
Scientific data often come from networks with complex relationships between their entities and can be properly modeled as semantic graphs. However, once designed, there is no simple way to cross through different designs in graph databases. The goal of this research is to specify and implement a framework to overcome these limitations, allowing users to build and explore arbitrary perspectives in graphs. The framework uses the concept of views to represent a perspective. The main contribution is to help scientists run models and analyze network (graph) data according to their specific design needs. The framework is under implementation and validation using a case study on water resource data.
|
2015 |
Daltio, Jaudete;
Medeiros, Claudia Bauzer
Hydrograph: Exploring Geographic Data in Graph Databases (conference)
XVI Brazilian Symposium on Geoinformatics (GEOINFO),
Campos do Jordao,
2015.
(
Abstract |
Links |
BibTeX |
Tags:
Graph Databases
)
@conference{Daltio2015,
abstract = {Water is becoming scarcer every day. Reliable information about volume and quality in each watershed is important for the management and proper planning of its use. Data-intensive science is increasingly needed in this context. Associated analysis processes require handling the drainage network that represents a watershed. This paper presents ongoing work that explores geographic watershed data using graph databases – a scalable and flexible kind of NoSQL database. The Brazilian Watershed database is used as a case study. The mapping between geographic and graph models is based on the natural network that emerges from the topological relationships among geographic entities.},
address = {Campos do Jordão},
author = {Jaudete Daltio and Claudia Bauzer Medeiros},
booktitle = {XVI Brazilian Symposium on Geoinformatics (GEOINFO)},
date = {2015-11-30},
keyword = {Graph Databases},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2016/02/daltio_medeiros_geoinfo.pdf},
pages = {44-55},
title = {Hydrograph: Exploring Geographic Data in Graph Databases},
year = {2015}
}
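The mapping described in the abstract, geographic entities as nodes and topological relationships ("flows into") as directed edges, can be sketched with plain dictionaries. This is an illustrative toy, with invented segment names, not Hydrograph's actual schema or code:

```python
# Hypothetical sketch of mapping a drainage network to a graph model:
# each drainage segment is a node and the topological relation
# "flows into" is a directed edge. Names are illustrative only.

def build_drainage_graph(segments, flows_into):
    """Build an adjacency map from segment ids and
    (upstream, downstream) pairs."""
    graph = {s: [] for s in segments}
    for upstream, downstream in flows_into:
        graph[upstream].append(downstream)
    return graph

def downstream_of(graph, source):
    """Collect every segment reachable downstream of `source`
    via a depth-first traversal."""
    seen, stack = set(), [source]
    while stack:
        node = stack.pop()
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

# Toy network: A and B flow into C, which flows into D.
g = build_drainage_graph(["A", "B", "C", "D"],
                         [("A", "C"), ("B", "C"), ("C", "D")])
print(sorted(downstream_of(g, "A")))  # ['C', 'D']
```

Traversals like `downstream_of` are exactly the kind of analysis that is natural in a graph database and awkward in the relational model.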
|
Junior, Luiz Celso Gomes
Querying and Managing Complex Networks (phdthesis)
Universidade Estadual de Campinas - UNICAMP,
phdthesis,
2015.
(
Abstract |
Links |
BibTeX |
Tags:
Complex Networks, Databases
)
@phdthesis{Gomes-Jr2015c,
abstract = {Understanding and quantifying the emergent properties of natural and man-made networks such as food webs, social interactions, and transportation infrastructures is a challenging task. The complex networks field was developed to encompass measurements, algorithms, and techniques to tackle such topics. Although complex networks research has been successfully applied to several areas of human activity, there is still a lack of common infrastructures for routine tasks, especially those related to data management. On the other hand, the databases field has focused on mastering data management issues since its beginnings, several decades ago. Database systems, however, offer limited network analysis capabilities. To enable a better support for complex network analysis tasks, a database system must offer adequate querying and data management capabilities. This thesis advocates for a tighter integration between the areas and presents our efforts towards this goal. Here we describe the Complex Data Management System (CDMS), which enables explorative querying of complex networks through a declarative query language. Query results are ranked based on network measurements assessed at query time. To support query processing, we introduce the Beta-algebra, which offers an operator capable of representing diverse measurements typical of complex network analysis. The algebra offers opportunities for transparent query optimization through query rewritings, proposed and discussed here. We also introduce the mapper mechanism for relationship management, which is integrated in the query language. The flexible query language and data management mechanisms are useful in scenarios other than complex network analysis. We demonstrate the use of the CDMS in applications such as institutional data integration, information retrieval, classification and recommendation. All aspects of the proposal are implemented and have been tested with real and synthetic data.},
author = {Luiz Celso Gomes Junior},
date = {2015-10-26},
keyword = {Complex Networks, Databases},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2016/02/Celso-Jr-Doutorado.pdf},
school = {Universidade Estadual de Campinas - UNICAMP},
title = {Querying and Managing Complex Networks},
year = {2015}
}
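The idea of ranking query results by a network measurement assessed at query time can be illustrated without the Beta-algebra itself. The sketch below is a simplified stand-in, using plain degree counting as the measurement; it is not the CDMS implementation:

```python
# Illustrative sketch (not the actual Beta-algebra): rank candidate
# query results by a network measurement, here plain degree centrality,
# computed over the graph at query time.

def degree_centrality(edges):
    """Count the degree of every node in an undirected edge list."""
    deg = {}
    for a, b in edges:
        deg[a] = deg.get(a, 0) + 1
        deg[b] = deg.get(b, 0) + 1
    return deg

def rank_results(candidates, edges):
    """Order candidate nodes by descending centrality, as a query
    processor might after the selection step."""
    deg = degree_centrality(edges)
    return sorted(candidates, key=lambda n: deg.get(n, 0), reverse=True)

edges = [("hub", "x"), ("hub", "y"), ("hub", "z"), ("x", "y")]
print(rank_results(["x", "hub", "z"], edges))  # ['hub', 'x', 'z']
```

Because the measurement is recomputed per query, the ranking reflects the current state of the network rather than a precomputed index, which is what opens the door to the query rewritings and optimizations the thesis discusses.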
|
Pantoja, Fagner L.;
Reis, Julio Cesar Dos;
Santanchè, André
Semantic Interpretation of Biological Identification Keys (conference)
Proceedings of the Brazilian Symposium on Databases (SBBD), 2015,
2015.
(
Abstract |
Links |
BibTeX |
Tags:
Identification Keys, NER, Semantic Interpretation
)
@conference{Pantoja2015,
abstract = {In biological data, Identification Keys (IKs) are central artifacts used by biologists to identify the taxonomic group of an observed specimen, such as family, order, species, etc. Despite their relevance, IKs are usually defined in a semistructured textual format, which does not favor easy retrieval and deep analysis of their data. This article aims to present a method to formally structure and extract semantic facts from IKs relying on graphs and domain ontologies. The approach explores classical extraction and matching procedures combined with the specific characteristics of IKs. Initial experiments reveal the feasibility of the approach.},
author = {Fagner L. Pantoja and Julio Cesar Dos Reis and André Santanchè},
booktitle = {Proceedings of the Brazilian Symposium on Databases (SBBD), 2015},
date = {2015-10-13},
keyword = {Identification Keys, NER, Semantic Interpretation},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2016/02/article-2015-08-20comments2.pdf},
title = {Semantic Interpretation of Biological Identification Keys},
year = {2015}
}
|
Cavoto, Patrícia;
Santanchè, André
ReGraph: Bridging Relational and Graph Databases (conference)
Proceedings of Satellite Events of the 30th Brazilian Symposium on Databases 2015 (SBBD 2015),
Sociedade Brasileira de Computação (SBC),
Petrópolis, RJ,
2316-5170,
2015.
(
Abstract |
Links |
BibTeX |
Tags:
graph database, ReGraph framework, relational database
)
@conference{Cavoto2015c,
abstract = {In this paper, we present ReGraph, a framework to map data from a relational to a graph database, managing a dynamic coexistence and evolution of both, not supported by related work. ReGraph has minimal impact in the existing infrastructure, providing a flexible and tailored graph model for each relational schema. It uses an initial ETL (Extract, Transform and Load) process to replicate the existing data in the graph database. A scheduled service is responsible for reflecting changes in the relational data into the graph, keeping both synchronized. ReGraph also provides an annotation functionality that allows users to add new information in the mapped graph, providing the materialization of inferences and data enrichment.},
address = {Petrópolis, RJ},
author = {Patrícia Cavoto and André Santanchè},
booktitle = {Proceedings of Satellite Events of the 30th Brazilian Symposium on Databases 2015 (SBBD 2015)},
date = {2015-10-13},
issn = {2316-5170},
keyword = {graph database, ReGraph framework, relational database},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2016/02/Cavoto2015c.pdf},
note = {Demo-paper},
pages = {179-184},
publisher = {Sociedade Brasileira de Computação (SBC)},
title = {ReGraph: Bridging Relational and Graph Databases},
year = {2015}
}
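The coexistence mechanism described in the abstract, an initial ETL load followed by a scheduled sync that reflects relational changes into the graph, can be sketched minimally. Data structures and function names below are hypothetical, not ReGraph's API:

```python
# Hedged sketch of the ETL-plus-scheduled-sync idea: an initial load
# copies relational rows into graph nodes keyed by primary key, and a
# periodic sync step upserts changed rows and drops deleted ones.
# All names are illustrative, not ReGraph's actual interface.

def etl(rows):
    """Initial load: each relational row becomes a graph node."""
    return {row["id"]: dict(row) for row in rows}

def sync(graph_nodes, rows):
    """Scheduled sync: upsert current rows, remove stale nodes."""
    current_ids = set()
    for row in rows:
        current_ids.add(row["id"])
        graph_nodes[row["id"]] = dict(row)   # insert or update
    for stale in set(graph_nodes) - current_ids:
        del graph_nodes[stale]               # row deleted on the relational side
    return graph_nodes

rows = [{"id": 1, "name": "tilapia"}, {"id": 2, "name": "tucunare"}]
g = etl(rows)
rows = [{"id": 1, "name": "tilapia nilotica"}]   # id 1 updated, id 2 deleted
sync(g, rows)
print(g)  # {1: {'id': 1, 'name': 'tilapia nilotica'}}
```

A real deployment would sync incrementally (e.g. from timestamps or change logs) rather than rescanning all rows, but the upsert-and-prune contract is the same.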
|
Mota, Matheus Silva;
Reis, Julio Cesar dos;
Goutte, Sandra;
Santanchè, André
Multiscale Dataspace for Organism-centric Analysis (conference)
Proceedings of the Brazilian Symposium on Databases (SBBD),
2015.
(
Abstract |
Links |
BibTeX |
Tags:
linkedscales
)
@conference{Mota2015b,
abstract = {Biologists increasingly need a unified view to understand and discover relationships among data elements scattered along data sources with different levels of heterogeneity. Existing approaches usually adopt ad-hoc heavyweight integration strategies, requiring a costly upfront effort involving a monolithic chain of steps to handle specific formats/schemas, with low or no reuse. This article proposes an original framework based on scales aligned with the dataspaces on demand integration principle. Scales systematize and encapsulate integration in discrete steps, fulfilling the dynamicity of the process through reuse of previous scales and localized customization. Although the proposed framework can be extended to several scenarios, this work focuses on the biology domain addressing the organism-centric analysis scenario.},
author = {Matheus Silva Mota and Julio Cesar dos Reis and Sandra Goutte and André Santanchè},
booktitle = {Proceedings of the Brazilian Symposium on Databases (SBBD)},
date = {2015-10-01},
keyword = {linkedscales},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2015/09/paper1.pdf},
title = {Multiscale Dataspace for Organism-centric Analysis},
year = {2015}
}
|
Borges, Luana Loubet;
Santanchè, André
Unificando a Comparação e Busca de Fenótipos em Model Organism Databases (conference)
Proceedings of 7th Brazilian Conference on Ontological Research (ONTOBRAS 2015),
2015.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Borges2015,
abstract = {Model Organism Databases (MODs) são largamente utilizados em pesquisas nas áreas médica e biológica. Como cada MOD é usualmente especializado em um tipo de organismo e.g., peixe-zebra, rato, humano, camundongo torna-se difícil a busca da mesma característica em organismos distintos para fins de correlação e comparação. Este trabalho apresenta um framework chamado Unified MOD Discovery Engine, cujo objetivo é permitir a correlação e busca de dados de vários MODs, a partir da unificação da sua representação dos dados. Este artigo apresenta o primeiro passo nesta direção, em que foram analisados e comparados os modelos de dados de dois MODs, o ZFIN (peixa-zebra) e MGI (camundongo), como base para a concepção de um modelo unificado. Tal modelo é a base de um grafo interligado, que permitirá ao usuário fazer buscas e comparações de forma unificada.},
author = {Luana Loubet Borges and André Santanchè},
booktitle = {Proceedings of 7th Brazilian Conference on Ontological Research (ONTOBRAS 2015)},
date = {2015-09-10},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2016/02/ontobras-luana-santanche-2015.pdf},
pages = {1-6},
title = {Unificando a Comparação e Busca de Fenótipos em Model Organism Databases},
year = {2015}
}
Model Organism Databases (MODs) are widely used in medical and biological research. Since each MOD is usually specialized in one type of organism (e.g., zebrafish, rat, human, mouse), it becomes hard to search for the same characteristic across distinct organisms for correlation and comparison purposes. This work presents a framework called Unified MOD Discovery Engine, whose goal is to enable the correlation and search of data from several MODs by unifying their data representation. This article presents the first step in this direction, in which the data models of two MODs, ZFIN (zebrafish) and MGI (mouse), were analyzed and compared as a basis for the design of a unified model. This model underlies an interlinked graph that will allow users to search and compare data in a unified way.
|
Mota, Matheus Silva;
Santanchè, André
Conceiving a Multiscale Dataspace for Data Analysis (conference)
Proceedings of the Brazilian Seminar on Ontologies (ONTOBRAS 2015),
CEUR,
2015.
(
Abstract |
Links |
BibTeX |
Tags:
dataspace, multiscale
)
@conference{Mota2015,
abstract = {A consequence of the intensive growth of information shared online is the increase of opportunities to link and integrate distinct sources of knowledge. This linking and integration can be hampered by different levels of heterogeneity in the available sources. Existing approaches focusing on heavyweight integration – e.g., schema mapping or ontology alignment – require costly upfront efforts to handle specific formats/schemas. In this scenario, dataspaces emerge as a modern alternative approach to address the integration of heterogeneous sources. The classic heavyweight upfront one-step integration is replaced by an incremental integration, starting from lightweight connections, tightening and improving them when benefits worth such effort. Based on several previous work on data integration for data analysis, this work discusses the conception of a multiscale-based dataspace architecture, called LinkedScales. It departs from the notion of integration-scales within a dataspace, and defines a systematic and progressive integration process via graph-based transformations over a graph database. LinkedScales aims to provide a homogeneous view of heterogeneous sources, allowing systems to reach and produce different integration levels on demand, going from raw representations (lower scales) towards ontology-like structures (higher scales).},
author = {Matheus Silva Mota and André Santanchè},
booktitle = {Proceedings of the Brazilian Seminar on Ontologies (ONTOBRAS 2015)},
date = {2015-09-08},
issn = {16130073},
keyword = {dataspace, multiscale},
link = {http://www.ime.usp.br/~ontobras/wp-content/uploads/2015/09/Conceiving-a-Multiscale-Dataspace-for-Data-Analysis.pdf
http://www.lis.ic.unicamp.br/?attachment_id=690
http://ceur-ws.org/Vol-1442/paper_21.pdf},
pages = {12},
publisher = {CEUR},
title = {Conceiving a Multiscale Dataspace for Data Analysis},
volume = {1442},
year = {2015}
}
|
Cavoto, Patrícia;
Santanchè, André
Annotation-Based Method for Linking Local and Global Knowledge Graphs (conference)
Proceedings of the Brazilian Seminar on Ontologies (ONTOBRAS 2015),
2015.
(
Abstract |
Links |
BibTeX |
Tags:
annotation-based method, graph database, ontologies, ReGraph framework
)
@conference{Cavoto2015b,
abstract = {In the last years, the use of data available in “global graphs” as Linked Open Data and Ontologies are increasing faster and bringing with them the popularization of the graph structure to represent information networks. One challenge, in this context, is how to link local and global knowledge graphs. This paper presents an approach to address this problem through an annotation-based method to link a local graph database to global graphs. Different from related work, the local graph is not derived from a static dataset, but it is a dynamic graph database evolving along the time, containing connections (annotations) with global graphs that must stay consistent during its evolution. We applied this method over a dataset with more than 44,500 nodes, annotating them with the values found in DBpedia and GeoNames. The proposed method is an extension of our ReGraph framework that bridges relational and graph databases, keeping both integrated, synchronized and in their native representations, with minimal impact in the current infrastructure.},
author = {Patrícia Cavoto and André Santanchè},
booktitle = {Proceedings of the Brazilian Seminar on Ontologies (ONTOBRAS 2015)},
date = {2015-09-08},
issn = {16130073},
keyword = {annotation-based method, graph database, ontologies, ReGraph framework},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2016/02/Cavoto2015b.pdf},
note = {Short-paper},
pages = {1-6},
title = {Annotation-Based Method for Linking Local and Global Knowledge Graphs},
volume = {1442 CEUR},
year = {2015}
}
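The annotation-based linking described in the abstract, local graph nodes gaining connections to entries of a global graph such as DBpedia or GeoNames, can be sketched as follows. The lookup table stands in for a real lookup service, and all names are hypothetical:

```python
# Minimal sketch of the annotation idea: local graph nodes gain
# annotation links pointing at resources of a global graph (e.g.
# DBpedia), kept as separate records so the local data stays in its
# native form. GLOBAL_LOOKUP is a stand-in for a real lookup query.

GLOBAL_LOOKUP = {  # hypothetical stand-in for a DBpedia/GeoNames search
    "Brazil": "http://dbpedia.org/resource/Brazil",
}

def annotate(local_nodes):
    """Return (local_node_id, global_uri) annotations for every
    local node whose label has a match in the global graph."""
    annotations = []
    for node in local_nodes:
        uri = GLOBAL_LOOKUP.get(node["label"])
        if uri is not None:
            annotations.append((node["id"], uri))
    return annotations

nodes = [{"id": 10, "label": "Brazil"}, {"id": 11, "label": "Rio Xingu"}]
print(annotate(nodes))  # [(10, 'http://dbpedia.org/resource/Brazil')]
```

Storing annotations as separate (node, URI) links, rather than overwriting node properties, is what lets them stay consistent while the local graph evolves.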
|
Cavoto, Patrícia;
Cardo, Victor;
Lebbe, Régine Vignes;
Santanchè, André
FishGraph: A Network-Driven Data Analysis (conference)
2015 IEEE 11th International Conference on e-Science (e-Science 2015),
IEEE,
Munich, Germany,
2015.
(
Abstract |
Links |
BibTeX |
Tags:
Biodiversity Information Systems, graph database, network topology analysis
)
@conference{Cavoto2015,
abstract = {There are a lot of data about biodiversity stored in different database models and most of them are relational. Recent research shows the importance of links and network analysis to discover knowledge in existing data. However, the relational model was not designed to address problems in which the links between data have the same importance as the data -- a common scenario in the biodiversity area. Moreover, the Linked Data and Semantic Web efforts empowered the fast growth of open knowledge repositories on the web, mainly in the RDF (Resource Description Framework) graph model. The flexible graph database model contrasts with the rigid relational model and is also suitable for data analysis focusing on links and the network topology, e.g., a connected component analysis. Our research is inspired by the data OLAP (OnLine Analytical Processing) approach of creating a special database designed for data analysis, a network-driven data analysis using graph databases, in our case. Beyond an initial ETL (Extract, Transform and Load) approach, we are facing the challenge of migrating the data from the relational to the graph database, managing a dynamic coexistence and evolution of both, not supported by related work. This work is motivated by a joint research involving network-driven data analysis over the FishBase global information system. We present a novel approach to analyzing the connections among thousands of identification keys and species and to linking local data to third party knowledge bases on the web.},
address = {Munich, Germany},
author = {Patrícia Cavoto and Victor Cardo and Régine Vignes Lebbe and André Santanchè},
booktitle = {2015 IEEE 11th International Conference on e-Science (e-Science 2015)},
date = {2015-08-31},
keyword = {Biodiversity Information Systems, graph database, network topology analysis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2016/02/Cavoto2015.pdf},
pages = {177 - 186},
publisher = {IEEE},
title = {FishGraph: A Network-Driven Data Analysis},
year = {2015}
}
|
Bernardo, Ivelize Rocha;
Borges, Michela;
Baranauskas, Maria Cecília Calani;
Santanchè, André
Interpretation of Construction Patterns for Biodiversity Spreadsheets (article)
Enterprise Information Systems,
2015.
(
Abstract |
Links |
BibTeX |
Tags:
Biodiversity data integration, Pattern recognition, Semantic mapping, Spreadsheet interpretation
)
@article{Bernardo2015,
abstract = {Spreadsheets are widely adopted as “popular databases”, where authors shape their solutions interactively. Although spreadsheets are easily adaptable by the author, their informal schemas cannot be automatically interpreted by machines to integrate data across independent spreadsheets. In biology, we observed a significant amount of biodiversity data in spreadsheets treated as isolated entities with different tabular organizations, but with high potential for data articulation. In order to automatically interpret these spreadsheets we exploit construction patterns followed by users in the biodiversity domain. This paper details evidences of such patterns and how they can lead to characterize the nature of a spreadsheet, as well as, its fields in a domain. It combines an automatic analysis of thousands of spreadsheets, collected on the Web, with results from a survey conducted with biologists. We propose a representation model to be used in automatic interpretation systems that captures these patterns.},
author = {Ivelize Rocha Bernardo and Michela Borges and Maria Cecília Calani Baranauskas and André Santanchè},
date = {2015-07-31},
journal = {Enterprise Information Systems},
keyword = {Biodiversity data integration, Pattern recognition, Semantic mapping, Spreadsheet interpretation},
link = {http://link.springer.com/chapter/10.1007/978-3-319-22348-3_22},
pages = {397-414},
title = {Interpretation of Construction Patterns for Biodiversity Spreadsheets},
volume = {227},
year = {2015}
}
|
Batista, Lucas Oliveira
Apoio ao Estudo de Correlações entre Séries Temporais baseadas em Anotações Semânticas (mastersthesis)
Universidade Estadual de Campinas - UNICAMP,
mastersthesis,
2015.
(
Abstract |
Links |
BibTeX |
Tags:
Semantic Annotation, Time Series Search, Time Series Semantic Annotation Model
)
@mastersthesis{Batista2015,
abstract = {Séries temporais são utilizadas em diversos domínios do conhecimento, por exemplo, economia, meteorologia e agricultura. Em várias situações, cientistas, muitas vezes, associam anotações a séries durante sua análise. Além disso, precisam buscar e correlacionar vários tipos de séries para estudar algum problema. Isto é dificultado não só pela heterogeneidade entre as séries, como também pela limitação dos mecanismos de busca por séries relevantes a uma correlação. As modalidades predominantes na busca por séries são baseadas ou em casamento de texto (anotações) ou em casamento de padrões. Não permitem buscas por séries que estejam relacionadas semanticamente. Diante deste cenário, esta dissertação propõe o TS³Annotation, um framework que usa anotações semânticas como base para permitir o estudo de correlações entre séries. As principais contribuições desta dissertação são: (1) um modelo de anotação semântica para séries temporais; (2) e o framework TS³Annotation que permite a especialistas anotar semanticamente séries, além de explorar o uso destas anotações como uma nova possibilidade na busca por séries temporais.},
author = {Lucas Oliveira Batista},
date = {2015-07-07},
keyword = {Semantic Annotation, Time Series Search, Time Series Semantic Annotation Model},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2015/08/LucasBatista_DissertacaoFinal.pdf},
school = {Universidade Estadual de Campinas - UNICAMP},
title = {Apoio ao Estudo de Correlações entre Séries Temporais baseadas em Anotações Semânticas},
year = {2015}
}
Time series are used in several knowledge domains, for example economics, meteorology, and agriculture. In many situations, scientists associate annotations with series during their analysis. Moreover, they need to search for and correlate several kinds of series to study a given problem. This is hampered not only by the heterogeneity among series but also by the limitations of the mechanisms for finding series relevant to a correlation. The predominant search modalities are based either on text matching (annotations) or on pattern matching; they do not allow searching for semantically related series. Given this scenario, this dissertation proposes TS³Annotation, a framework that uses semantic annotations as a basis for studying correlations among series. The main contributions of this dissertation are: (1) a semantic annotation model for time series; and (2) the TS³Annotation framework, which allows experts to semantically annotate series and to explore the use of these annotations as a new possibility in the search for time series.
|
Santo, Jacqueline Midlej do Espírito
Especificação e Detecção de Padrões Complexos de Variáveis Ambientais em Aplicações de Biodiversidade (mastersthesis)
Universidade Estadual de Campinas - UNICAMP,
mastersthesis,
2015.
(
Abstract |
Links |
BibTeX |
Tags:
Complex Event Processing, Pattern Detection, Pattern Specification
)
@mastersthesis{santo2015,
abstract = {Aplicações de biodiversidade se caracterizam por necessitar de uma grande variedade de dados ambientais em múltiplas escalas. Este contexto envolve uma enorme quantidade de dados gerados por fontes heterogêneas, sendo o fluxo de dados de sensores uma das principais fontes. Um problema em aberto neste contexto é como especificar e detectar cenários de interesse a partir de variáveis ambientais em múltiplas escalas, para facilitar aos cientistas a análise de fenômenos e correlações com dados coletados em campo. Para ajudar a solucionar o problema, a dissertação se baseia na teoria de Processamento de Eventos Complexos para permitir a especificação de cenários através de padrões e a detecção da ocorrência do cenário em tempo real. Nesta literatura, dados são tratados como eventos e padrões são descritos pelas especificações de eventos e seus relacionamentos. Linguagens de eventos, no entanto, não consideram aspectos espaciais (necessários em biodiversidade) e a composição de eventos é limitada. Tendo em vista esse contexto, a dissertação propõe uma linguagem baseada em lógica para que cientistas especifiquem cenários de interesse. Esses cenários são baseados em composição de eventos complexos.
As principais contribuições da dissertação são: proposta da arquitetura de um framework para detecção de eventos complexos, que estende o trabalho de Koga 2013; um modelo de dados para representar eventos em biodiversidade; e uma linguagem para descrever padrões de forma hierárquica, explorando o relacionamento espacial e temporal entre os eventos em diferentes níveis de abstração.},
author = {Jacqueline Midlej do Espírito Santo},
date = {2015-07-06},
keyword = {Complex Event Processing, Pattern Detection, Pattern Specification},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2015/07/JacquelineMidlej-Dissertacao.pdf},
school = {Universidade Estadual de Campinas - UNICAMP},
title = {Especificação e Detecção de Padrões Complexos de Variáveis Ambientais em Aplicações de Biodiversidade},
year = {2015}
}
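The core CEP idea this dissertation builds on can be sketched in a few lines (illustrative only; the event fields and the "no rain, then extreme heat" pattern are hypothetical, not from the dissertation): sensor readings become events, and a pattern is a sequence of predicates that must hold within a time window.

```python
from dataclasses import dataclass

@dataclass
class Event:
    kind: str      # e.g. "rainfall", "temperature"
    value: float
    t: int         # timestamp

def detect(events, first, second, window):
    """Return (e1, e2) pairs where `first` holds on e1 and `second` holds
    on some later e2 within `window` time units -- a minimal sequence pattern."""
    matches = []
    for e1 in events:
        if not first(e1):
            continue
        for e2 in events:
            if second(e2) and 0 < e2.t - e1.t <= window:
                matches.append((e1, e2))
    return matches

stream = [Event("rainfall", 0.0, 1), Event("temperature", 41.0, 3),
          Event("rainfall", 12.0, 5)]
hits = detect(stream,
              first=lambda e: e.kind == "rainfall" and e.value == 0.0,
              second=lambda e: e.kind == "temperature" and e.value > 40.0,
              window=5)
print(len(hits))  # 1: no rain followed by extreme heat
```

The dissertation's language goes well beyond this sketch, adding spatial relationships and hierarchical composition of patterns.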
|
Beserra, Renato
Quality Flow: a collaborative quality-aware platform for experiments in eScience (mastersthesis)
Universidade Estadual de Campinas - UNICAMP,
mastersthesis,
2015.
(
Abstract |
Links |
BibTeX |
Tags:
Data quality
)
@mastersthesis{Beserra2015,
abstract = {Many scientific research procedures rely upon the analysis of data obtained from heterogeneous sources. The validity of the research results depends, among other factors, on the quality of the data. Data quality is a topic that has pervaded computer science research for decades. Though there are many proposals for data quality assessment, there are still open problems such as mechanisms to support flexible quality assessment and ways to derive data quality. The goal of this dissertation is to work on these issues. The main contribution of this dissertation is the proposal of QualityFlow: a quality-aware collaborative platform for experiments in eScience. The following contributions were accomplished: to support the creation of quality-aware scientific workflows, allowing the addition of quality attributes to workflows, while at the same time letting distinct users define their specific quality metrics for the same workflow; to allow users to keep track of different quality assessments for a given process, thereby providing insights into the actual value of data and workflow; and to allow scientists to customize data quality dimensions and quality metrics collaboratively. QualityFlow was developed as a web prototype, and executed in two experiments - one based upon a real problem and the other on a sample workflow.},
author = {Renato Beserra},
date = {2015-06-12},
keyword = {Data quality},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2016/02/TeseRenatoBeserra.pdf},
school = {Universidade Estadual de Campinas - UNICAMP},
title = {Quality Flow: a collaborative quality-aware platform for experiments in eScience},
year = {2015}
}
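QualityFlow's central notion, distinct users attaching their own quality metrics to the same workflow output, can be sketched as follows (the user names, metrics, and record fields are hypothetical, not from the dissertation):

```python
# Each user registers a metric: a function mapping a data record to a
# quality score in [0, 1]. The same record is then assessed per user.
metrics = {
    "alice": lambda record: 1.0 if record["source"] == "sensor" else 0.5,
    "bob":   lambda record: min(1.0, record["samples"] / 100),
}

record = {"source": "sensor", "samples": 60}
scores = {user: fn(record) for user, fn in metrics.items()}
print(scores)  # {'alice': 1.0, 'bob': 0.6}
```

Keeping the metrics separate per user is what lets two scientists disagree about the quality of the very same workflow result, one of the requirements the dissertation names.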
|
Creus-Tomàs, Jordi;
Faria, Fabio Augusto;
Esquerdo, Júlio César Dalla Mora;
Coutinho, Alexandre Camargo;
Medeiros, Claudia Bauzer
SiRCub -- Brazilian Agricultural Crop Recognition System (conference)
XVII Simpósio Brasileiro de Sensoriamento Remoto,
2015.
(
Abstract |
Links |
BibTeX |
Tags:
Conference, crop classification, LULC, NDVI, séries temporais, SVM, time series, Timesat
)
@conference{Creus-Tomas2015,
abstract = {This paper presents a novel approach to classify agricultural crops using NDVI time series. The novelty lies in i) extracting a set of features from each NDVI curve, and ii) using them to train a crop classification model with a Support Vector Machine (SVM). Specifically, we use the TIMESAT program package to: 1) smooth the time series, 2) decompose them into agricultural seasons–a season is the period between sowing and harvesting–, and 3) extract the features for each season. The 11 crop features we extract include the start and end of season, its amplitude, and the curve gradients of the sprouting and senescence periods, among others. Once we have the collection of features, they are fed into an SVM system –we use the LIBSVM library–, together with a collection of annotations about the land use of the corresponding time series. These annotations represent the type of crop for a given location and agricultural season, and they are provided by specialists from Embrapa. As a result we obtain a classification model that allows for identifying different crop classes. Our methodology is generic and can be applied to a variety of regions and crop types. We have developed a system called SIRCUB (Sistema de Reconhecimento de Culturas brasileiro), which implements this methodology. We describe in this paper the architecture of the system and the crop model learning methodology.},
author = {Jordi Creus-Tomàs and Fabio Augusto Faria and Júlio César Dalla Mora Esquerdo and Alexandre Camargo Coutinho and Claudia Bauzer Medeiros},
booktitle = {XVII Simpósio Brasileiro de Sensoriamento Remoto},
date = {2015-04-25},
keyword = {Conference, crop classification, LULC, NDVI, séries temporais, SVM, time series, Timesat},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2015/04/sircub_sbsr1.pdf},
title = {SiRCub -- Brazilian Agricultural Crop Recognition System},
year = {2015}
}
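The per-season feature extraction this entry describes can be illustrated with a simplified stdlib sketch (the real system uses TIMESAT and 11 features; the three descriptors and the NDVI values below are a hypothetical, reduced example):

```python
# One agricultural season of (synthetic) NDVI values, sowing to harvest.
ndvi = [0.2, 0.3, 0.6, 0.8, 0.7, 0.4, 0.2]

def season_features(curve):
    """Derive simple descriptors from an NDVI curve for one season."""
    peak = max(curve)
    base = min(curve)
    return {
        "amplitude": round(peak - base, 3),        # peak minus baseline
        "peak_index": curve.index(peak),           # when the season peaks
        "greenup_rate": round(peak - curve[0], 3), # rough sprouting gradient
    }

print(season_features(ndvi))
# {'amplitude': 0.6, 'peak_index': 3, 'greenup_rate': 0.6}
```

In the paper, vectors of such features, one per season, are paired with Embrapa's land-use annotations and used to train the SVM classifier.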
|
Gomes-Jr, Luiz;
Amann, Bernd;
Santanche, André
Beta-Algebra: Towards a Relational Algebra for Graph Analysis (conference)
Workshop Proceedings of the EDBT/ICDT 2015 Joint Conference,
GraphQ/EDBT 2015,
2015.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Gomes-Jr2015b,
abstract = {Graph analysis is an essential tool to understand natural and man-made networks, such as social networks, food webs, transportation infrastructures, etc. Although graph analysis has fostered the development of algorithms, visual tools, and distributed processing frameworks, there is still little support for analysis at the query language level. Current graph query languages are mostly concerned with flexible matching of subgraphs, while graph processing frameworks are mostly concerned with fast parallel execution of instructions. Our goal is to provide analysis capabilities at the language level, allowing more interactive and exploratory query-based analysis. In this paper, we present our ongoing efforts towards a relational algebra extension that offers an operator for graph-based data aggregation. The beta (β) operator is composed of four suboperators, which are used to control the path-based aggregations. The β-algebra allows seamless composition of queries that mix relational and graph-based aspects. Here we introduce our current algebra and provide examples of its use. We also show how we are using the analysis strategy in query scenarios. Since the algebra-based query scenario allows for execution plan rewritings, we also discuss our first efforts on equivalence rules for query optimization.},
author = {Luiz Gomes-Jr and Bernd Amann and André Santanche},
booktitle = {Workshop Proceedings of the EDBT/ICDT 2015 Joint Conference},
date = {2015-03-18},
journal = {GraphQ/EDBT 2015},
keyword = {Conference},
link = {http://ceur-ws.org/Vol-1330/paper-26.pdf},
title = {Beta-Algebra: Towards a Relational Algebra for Graph Analysis},
year = {2015}
}
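The kind of path-based aggregation the β operator adds to relational queries can be sketched in plain Python (this is a stand-alone illustration of the general idea, not the paper's operator; the graph and scores are hypothetical):

```python
from collections import deque

# Toy graph with a numeric attribute per node.
graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
score = {"a": 1.0, "b": 2.0, "c": 3.0, "d": 4.0}

def beta_sum(start, max_hops):
    """Sum `score` over nodes reachable from `start` within max_hops edges,
    i.e. an aggregation whose scope is defined by paths, not by a join key."""
    seen, total = {start}, score[start]
    frontier = deque([(start, 0)])
    while frontier:
        node, hops = frontier.popleft()
        if hops == max_hops:
            continue
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                total += score[nxt]
                frontier.append((nxt, hops + 1))
    return total

print(beta_sum("a", 1))  # a + b + c = 6.0
```

In the β-algebra such aggregations compose with ordinary relational operators, which is what makes mixed relational/graph queries expressible in one plan.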
|
Gomes-Jr., Luiz;
Santanche, André
The Web Within: Leveraging Web Standards and Graph Analysis to Enable Application-Level Integration of Institutional Data (article)
Transactions on Large-Scale Data- and Knowledge-Centered Systems XIX,
2015.
(
Abstract |
Links |
BibTeX |
Tags:
Journal Paper
)
@article{Gomes-Jr.2015,
abstract = {The expansion of the Web and of our capacity to produce and store information have had a profound impact on the way we organize, manipulate and share data. We have seen an increased specialization of database back-ends and data models to respond to modern application needs: text indexing engines organize unstructured data, standards and models were created to support the Semantic Web, Big Data requirements stimulated an explosion of data representation and manipulation models. This complex and heterogeneous environment demands unified strategies that enable data integration and, especially, cross-application, expressive querying. Here we present a new approach for the integration of structured and unstructured data within organizations. Our solution is based on the Complex Data Management System (CDMS), a system being developed to handle data typical of complex networks. The CDMS enables a relationship-centric interaction with data that brings many advantages to the institutional data integration scenario, allowing applications to rely on common models for data querying and manipulation. In our framework, diverse data models are integrated in a unifying RDF graph. A novel query model allows the combination of concepts from information retrieval, databases, and complex networks into a declarative query language that extends SPARQL. This query language enables flexible correlation queries over the unified data, enabling support for a wide range of applications such as CMSs, recommendation systems, social networks, etc. We also introduce Mappers, a data management mechanism that simplifies the integration of heterogeneous data and that is integrated in the query language for further flexibility. Experimental results from real data demonstrate the viability of our approach.},
author = {Luiz Gomes-Jr. and André Santanche},
date = {2015-02-24},
journal = {Transactions on Large-Scale Data- and Knowledge-Centered Systems XIX},
keyword = {Journal Paper},
link = {http://link.springer.com/chapter/10.1007%2F978-3-662-46562-2_2},
pages = {26-54},
title = {The Web Within: Leveraging Web Standards and Graph Analysis to Enable Application-Level Integration of Institutional Data},
volume = {8990},
year = {2015}
}
|
2014 |
Santo, Jacqueline Midlej do Espírito;
Medeiros, Claudia Bauzer
Complex Pattern Detection and Specification to Support Biodiversity Applications (conference)
Proc of SBBD 2014 - WTDBD,
Sociedade Brasileira de Computação (SBC),
Curitiba - PR,
2014.
(
Abstract |
Links |
BibTeX |
Tags:
Complex Event Processing, Conference, Multiscale, Pattern Specification
)
@conference{SantoSBBD2014,
abstract = {Biodiversity scientists often need to define and detect scenarios of interest from data streams delivered by meteorological sensors. For example, scenarios such as deforestation or forest fire need to be detected in order to reduce impacts on the environment. Such data streams are characterized by their heterogeneity across spatial and temporal scales, which hampers detection of events and construction of scenarios. To help scientists in this task, this work proposes the use of the theory of Complex Event Processing (CEP) to define and detect complex event patterns in this context. The two main contributions focus on the specification of events and patterns for the biodiversity context and on the mechanism to detect these patterns. The first one requires extending an Event Processing Language (EPL) to include spatial relationships in the pattern. The second one extends Koga's framework [Koga 2013], which integrates heterogeneous data sources, with the detection of complex patterns. This paper extends the short paper accepted for the Brazilian Workshop e-Science (BreSci) 2014 with the specification for events and patterns.},
address = {Curitiba - PR},
author = {Jacqueline Midlej do Espírito Santo and Claudia Bauzer Medeiros},
booktitle = {Proc of SBBD 2014 - WTDBD},
date = {2014-10-08},
editor = {Mirela Moto et al},
issn = {2316-5170},
keyword = {Complex Event Processing, Conference, Multiscale, Pattern Specification},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2015/04/wtd_sbbd_v4-2.pdf},
note = {Short-paper},
pages = {288-294},
publisher = {Sociedade Brasileira de Computação (SBC)},
title = {Complex Pattern Detection and Specification to Support Biodiversity Applications},
year = {2014}
}
|
Batista, Lucas Oliveira;
Medeiros, Claudia Bauzer
Searching Time Series via Semantic Annotations (conference)
Proc. SBBD 2014 - XIII Workshop de Teses e Dissertações em Banco de Dados (WTDBD),
Sociedade Brasileira de Computação (SBC),
Curitiba - PR, Brasil,
2014.
(
Abstract |
Links |
BibTeX |
Tags:
Semantic Annotation, Time Series Search, Time Series Semantic Annotation Model
)
@conference{BatistaMedeiros2014b,
abstract = {Time series are used in several domains of knowledge. During their analysis, experts often create or analyze associations between time series and annotations. In order to study a problem, for example, patient behavior or crop patterns, experts need to search for and correlate several time series. However, finding appropriate series related to a problem is a difficult task. Search is usually performed using a few parameters, such as the series' geographic location. Annotations may help the search via string matching. Given this scenario, this paper discusses work in progress to design and partially develop a software framework to search time series via semantic annotations. It will support experts in the correlation of time series, foster collaboration among experts, and allow the use of Linked Data concepts to aggregate knowledge to content. This paper extends the short paper accepted for BRESCI - Brazilian Workshop e-Science 2014. The extensions include a time series semantic annotation model, implementation details, and a longer theoretical related work section.},
address = {Curitiba - PR, Brasil},
author = {Lucas Oliveira Batista and Claudia Bauzer Medeiros},
booktitle = {Proc. SBBD 2014 - XIII Workshop de Teses e Dissertações em Banco de Dados (WTDBD)},
date = {2014-10-08},
issn = {2316-5170},
keyword = {Semantic Annotation, Time Series Search, Time Series Semantic Annotation Model},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2015/04/LucasBatista_WTDBDFInal1.pdf},
note = {Short-paper},
pages = {339 - 345},
publisher = {Sociedade Brasileira de Computação (SBC)},
title = {Searching Time Series via Semantic Annotations},
year = {2014}
}
|
Cavoto, Patrícia;
Santanchè, André
Arquitetura Híbrida de Integração entre Banco de Dados Relacional e de Grafos (conference)
Proc. SBBD 2014 - XIII Workshop de Teses e Dissertações em Banco de Dados (WTDBD),
Sociedade Brasileira de Computação (SBC),
Curitiba - PR, Brasil,
2014.
(
Abstract |
Links |
BibTeX |
Tags:
banco de dados híbrido, integração de bases, modelo de grafos, modelo relacional
)
@conference{Cavoto2014,
abstract = {A complexidade e o volume dos relacionamentos entre as informações, bem como a necessidade de manter e integrar dados de estruturas heterogêneas aumentam exponencialmente a cada dia. Isto é particularmente importante no contexto de eScience, especialmente biodiversidade, área de interesse deste projeto – em que as relações são fundamentais nas análises. Neste contexto, o modelo de banco de dados de grafos pode apresentar-se como uma abordagem mais apropriada e eficiente no gerenciamento e recuperação destas informações. Em contrapartida, há um grande legado de sistemas que utilizam bancos de dados relacionais, que cumprem um papel fundamental em diversas tarefas. Apresentamos então neste trabalho uma proposta de arquitetura híbrida de integração que permite a convivência dos modelos relacional e de grafos em sua forma nativa, reduzindo o impacto de adaptações em bases relacionais preexistentes e explorando as vantagens de cada modelo nativo nas operações de gerenciamento e recuperação.},
address = {Curitiba - PR, Brasil},
author = {Patrícia Cavoto and André Santanchè},
booktitle = {Proc. SBBD 2014 - XIII Workshop de Teses e Dissertações em Banco de Dados (WTDBD)},
date = {2014-10-06},
issn = {2316-5170},
keyword = {banco de dados híbrido, integração de bases, modelo de grafos, modelo relacional},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2016/02/Cavoto2014.pdf},
note = {Short-paper},
pages = {274-280},
publisher = {Sociedade Brasileira de Computação (SBC)},
title = {Arquitetura Híbrida de Integração entre Banco de Dados Relacional e de Grafos},
year = {2014}
}
|
Santanchè, André;
Longo, João Sávio C.;
Jomier, Geneviève;
Zam, Michel;
Medeiros, Claudia Bauzer
Multi-focus Research and Geospatial Data - anthropocentric concerns (article)
JIDM - Journal of Information and Data Management,
2,
2014.
(
Abstract |
Links |
BibTeX |
Tags:
Geospatial data, Multiple aspects, Multiscale, Multiscale views, Version
)
@article{SantancheMedeirosJomier2014,
abstract = {Work on multiscale issues presents countless challenges that have been long attacked by GIScience researchers. Research is usually concentrated in one of two directions - new data models to support handling multiple scales, or data structures and algorithms to process data across scales. Complementary implementation aspects are concerned with generalization (and/or virtualization of distinct scales), or with linking entities of interest across scales (e.g., using bottom-up implementation of specific structures, without relying on any specific DBMS). However, researchers seldom take into account the fact that multiscale scenarios are increasingly constructed cooperatively, and require distinct perspectives of the world, in which each research group considers specific aspects of a problem. The combination of handling multiple scales at a time, and having multiple user perspectives per scale constitutes what we call multi-focus research. This paper presents our proposal to attack multi-focus scenarios, which considers distinct aspects of the problem of managing multiple scales, illustrated with examples of multiscale geospatial data. Our approach builds upon a specific database version model – the so-called multiversion MVDB – which has already been successfully implemented in several geospatial scenarios, being extended here to support multi-focus research. This extension was implemented and tested in a real world case study, briefly discussed here.},
author = {André Santanchè and João Sávio C. Longo and Geneviève Jomier and Michel Zam and Claudia Bauzer Medeiros},
date = {2014-09-18},
journal = {JIDM - Journal of Information and Data Management},
keyword = {Geospatial data, Multiple aspects, Multiscale, Multiscale views, Version},
link = {https://seer.lcc.ufmg.br/index.php/jidm/article/view/418/626},
number = {2},
pages = {146-160},
title = {Multi-focus Research and Geospatial Data - anthropocentric concerns},
volume = {5},
year = {2014}
}
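The multiversion idea behind the MVDB-based approach this article extends can be illustrated with a minimal sketch (the entity, versions, and fields below are hypothetical, not from the article): each entity keeps per-version values, so distinct foci or scales coexist without duplicating the whole database.

```python
# Hypothetical in-memory stand-in for a multiversion store: an entity maps
# version identifiers to that version's value, with a shared "base" fallback.
db = {}

def put(entity, version, value):
    db.setdefault(entity, {})[version] = value

def get(entity, version):
    """Value of `entity` as seen from `version`, falling back to 'base'."""
    versions = db.get(entity, {})
    return versions.get(version, versions.get("base"))

put("river_1", "base", {"scale": "1:50000", "length_km": 12.0})
put("river_1", "focus_hydrology", {"scale": "1:10000", "length_km": 12.4})

print(get("river_1", "focus_hydrology")["length_km"])  # 12.4
print(get("river_1", "focus_ecology")["length_km"])    # 12.0 (base fallback)
```

The fallback is the point: a research group's focus only stores what it changes, while everything else is inherited from the shared representation.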
|
Santo, Jacqueline Midlej do Espírito;
Medeiros, Claudia Bauzer
Complex Pattern Detection and Specification from Multiscale Environmental Variables for Biodiversity Applications (conference)
Proc. of CSBC 2014 - BreSci,
Sociedade Brasileira de Computação (SBC),
Brasília - DF,
2014.
(
Abstract |
Links |
BibTeX |
Tags:
Complex Event Processing, Conference, Pattern Detection
)
@conference{SantoBRESCI2014,
abstract = {Biodiversity scientists often need to define and detect scenarios of interest from data streams delivered by meteorological sensors. Such streams are characterized by their heterogeneity across spatial and temporal scales, which hampers construction of scenarios. To help them in this task, this paper proposes the use of the theory of Complex Event Processing (CEP) to detect complex event patterns in this context.},
address = {Brasília - DF},
author = {Jacqueline Midlej do Espírito Santo and Claudia Bauzer Medeiros},
booktitle = {Proc. of CSBC 2014 - BreSci},
date = {2014-07-31},
editor = {Eduardo Adilio Pelinson Alchieri and Priscila Solís Barreto},
issn = {2175-2761},
keyword = {Complex Event Processing, Conference, Pattern Detection},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2015/04/bresciFinal.pdf},
note = {Short-paper},
pages = {389-392},
publisher = {Sociedade Brasileira de Computação (SBC)},
title = {Complex Pattern Detection and Specification from Multiscale Environmental Variables for Biodiversity Applications},
year = {2014}
}
|
Batista, Lucas Oliveira;
Medeiros, Claudia Bauzer
Supporting the Study of Correlations between Time Series via Semantic Annotations (conference)
Proc. CSBC 2014 - VIII Brazilian e-Science Workshop (BRESCI),
Sociedade Brasileira de Computação (SBC),
Brasília - DF, Brasil,
2014.
(
Abstract |
Links |
BibTeX |
Tags:
Semantic Annotation, time series
)
@conference{BatistaMedeiros2014,
abstract = {This paper presents work in progress on the design and development of a software framework that supports experts in the correlation of time series. It will allow searching for time series via semantic annotations, thereby fostering collaboration among experts and aggregating knowledge to content.},
address = {Brasília - DF, Brasil},
author = {Lucas Oliveira Batista and Claudia Bauzer Medeiros},
booktitle = {Proc. CSBC 2014 - VIII Brazilian e-Science Workshop (BRESCI)},
date = {2014-07-31},
editor = {Eduardo Adilio Pelinson Alchieri and Priscila Solís Barreto},
issn = {2175-2761},
keyword = {Semantic Annotation, time series},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2015/04/LucasBatistaBresciFinal.pdf},
note = {Short-paper},
pages = {385-388},
publisher = {Sociedade Brasileira de Computação (SBC)},
title = {Supporting the Study of Correlations between Time Series via Semantic Annotations},
year = {2014}
}
|
Miranda, Eduardo;
Grand, Anaïs;
Lebbe, Régine Vignes;
Santanchè, André
Towards a Linked Biology - An integrated perspective of phenotypes and phylogenetic trees (conference)
10th International Conference on Data Integration in the Life Sciences,
Lisbon, Portugal,
10th International Conference on Data Integration in the Life Sciences,
2014.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Miranda2014b,
abstract = {A large number of studies in biology, including those involving phylogenetic tree reconstruction, result in the production of a huge amount of data – e.g., phenotype descriptions, morphological data matrices, etc. Biologists increasingly face a challenge and opportunity of effectively discovering useful knowledge by crossing and comparing several pieces of information, not always linked and integrated. Our motivation stems from the idea of transforming these data into a network of relationships, looking for links among related elements and enhancing the ability to solve more complex problems supported by machines. This work addresses this problem through a graph database model, linking and coupling phylogenetic trees and phenotype descriptions. In this paper we give an overview of an experiment exploiting the synergy of linked data sources to support biologists in data analysis, comparison and inferences.},
address = {Lisbon, Portugal},
author = {Eduardo Miranda and Anaïs Grand and Régine Vignes Lebbe and André Santanchè},
date = {2014-07-17},
journal = {10th International Conference on Data Integration in the Life Sciences},
keyword = {Conference},
link = {http://dils2014.inesc-id.pt/data/uploads/paper_38.pdf},
organization = {DILS 2014},
publisher = {10th International Conference on Data Integration in the Life Sciences},
title = {Towards a Linked Biology - An integrated perspective of phenotypes and phylogenetic trees},
year = {2014}
}
|
Daltio, Jaudete;
Medeiros, Claudia Bauzer
Handling Multiple Foci in Graph Databases (conference)
Lecture Notes in Bioinformatics (LNBI) - Proceedings of 10th International Conference on Data Integration in the Life Sciences,
Lisboa, Portugal,
2014.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Daltio2014,
abstract = {Scientific research has become data-intensive and data-dependent, with distributed, multidisciplinary teams creating and sharing their findings. Graph databases are being increasingly considered as a computational means to loosely integrate such data, in particular when relationships among data and the data itself are at the same importance level. However, a problem to be faced in this context is that of multiple foci – where a focus, here, is a perspective on the data, for a particular research team and context. This paper describes a conceptual framework for the construction of arbitrary foci on graph databases, to help solve this problem. The framework, under construction, is illustrated using examples based on needs of teams involved in biodiversity research.},
address = {Lisboa, Portugal},
author = {Jaudete Daltio and Claudia Bauzer Medeiros},
booktitle = {Lecture Notes in Bioinformatics (LNBI) - Proceedings of 10th International Conference on Data Integration in the Life Sciences},
date = {2014-07-17},
editor = {Springer International Publishing Switzerland},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2015/04/Handling-Multiple-Foci-in-Graph-Databases.pdf},
pages = {58-65},
title = {Handling Multiple Foci in Graph Databases},
volume = {8574},
year = {2014}
}
|
Cugler, Daniel Cintra
Supporting the collection and curation of biological observation metadata (phdthesis)
Universidade Estadual de Campinas - UNICAMP,
phdthesis,
2014.
(
Abstract |
Links |
BibTeX |
Tags:
Biodiversity Information Systems
)
@phdthesis{Cugler2014,
abstract = {Biological observation databases contain information about the occurrence of an organism or set of organisms detected at a given place and time according to some methodology. Such databases store a variety of data, at multiple spatial and temporal scales, including images, maps, sounds, texts and so on. This priceless information can be used in a wide range of research initiatives, e.g., global warming, species behavior or food production. All such studies are based on analyzing the records themselves, and their metadata. Most times, analyses start from metadata, often used to index the observation records. However, given the nature of observation activities, metadata may suffer from quality problems, hampering such analyses. For example, there may be metadata gaps (e.g., missing attributes, or insufficient records). This can have serious effects: in biodiversity studies, for instance, metadata problems regarding a single species can affect the understanding not just of the species, but of wider ecological interactions. This thesis proposes a set of processes to help solve problems in metadata quality. While previous approaches concern one given aspect of the problem, the thesis provides an architecture and algorithms that encompass the whole cycle of managing biological observation metadata, which goes from acquiring data to retrieving database records. Our contributions are divided into two categories: (a) data enrichment and (b) data cleaning. Contributions in category (a) provide additional information for both missing attributes in existent records, and missing records for specific requirements. Our strategies use authoritative remote data sources and VGI (Volunteered Geographic Information) to enrich such metadata, providing missing information. Contributions in category (b) detect anomalies in biological observation metadata by performing spatial analyses that contrast location of the observations with authoritative geographic distribution maps. 
Thus, the main contributions are: (i) an architecture to retrieve biological observation records, which derives missing attributes by using external data sources; (ii) a geographical approach for anomaly detection and (iii) an approach for adaptive acquisition of VGI to fill out metadata gaps, using mobile devices and sensors. These contributions were validated by actual implementations, using as case study the challenges presented by the management of biological observation metadata of the Fonoteca Neotropical Jacques Vielliard (FNJV), one of the top 10 animal sound collections in the world.},
author = {Daniel Cintra Cugler},
date = {2014-05-08},
keyword = {Biodiversity Information Systems},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2017/01/CuglerDanielCintra_D.pdf},
note = {Supervisor Claudia Bauzer Medeiros},
school = {Universidade Estadual de Campinas - UNICAMP},
title = {Supporting the collection and curation of biological observation metadata},
year = {2014}
}
|
Bernardo, Ivelize Rocha;
Santanchè, André;
Baranauskas, Maria Cecília Calani
Automatic Interpretation Biodiversity Spreadsheets Based on Recognition of Construction Patterns (conference)
Proceedings of the 16th International Conference on Enterprise Information Systems (ICEIS 2014),
2014.
(
Abstract |
Links |
BibTeX |
Tags:
Biodiversity data integration, Information Integration, Patterns Recognition, Semantic mapping, Spreadsheet interpretation
)
@conference{Bernardo2014,
abstract = {Spreadsheets are widely adopted as "popular databases", where authors shape their solutions interactively. Although spreadsheets have characteristics that facilitate their adaptation by the author, they are not designed to integrate data across independent spreadsheets. In biology, we observed a significant amount of biodiversity data in spreadsheets treated as isolated entities with different tabular organizations, but with high potential for data articulation. In order to promote interoperability among these spreadsheets, we propose in this paper a technique based on pattern recognition of spreadsheets belonging to the biodiversity domain. It can be exploited to identify the spreadsheet at a higher level of abstraction – e.g., it is possible to identify the nature of a spreadsheet as a catalog or a collection of specimens – improving the interoperability process. The paper details evidence of construction patterns of spreadsheets and proposes a semantic representation for them.},
author = {Ivelize Rocha Bernardo and André Santanchè and Maria Cecília Calani Baranauskas},
booktitle = {Proceedings of the 16th International Conference on Enterprise Information Systems (ICEIS 2014)},
date = {2014-04-27},
keyword = {Biodiversity data integration, Information Integration, Patterns Recognition, Semantic mapping, Spreadsheet interpretation},
link = {http://www.scitepress.org/DigitalLibrary/PublicationsDetail.aspx?ID=oEnR7oWqnHw=&t=1},
pages = {57-68},
title = {Automatic Interpretation Biodiversity Spreadsheets Based on Recognition of Construction Patterns},
year = {2014}
}
|
Senra, Rodrigo Dias Arruda;
Medeiros, Claudia Bauzer
Evaluate, Reorganize and Share: An Approach to Dynamically Organize Digital Hierarchies (article)
International Journal of Metadata, Semantics and Ontologies,
2014.
(
Abstract |
Links |
BibTeX |
Tags:
Data integration, Data sharing, Organization, Organograph, Personal Information Management
)
@article{SenraMedeiros2014,
abstract = {We are overwhelmed and overloaded with the data deluge brought by the digital age. Hierarchies are pervasive cognitive patterns that allow us to reorganize data and reduce the dimensionality of the information space to manageable levels (e.g., filesystems and navigational menus). In spite of their widespread adoption, such hierarchies can be improved to cope with the present needs of data sharing and reuse. First, we seldom use mechanisms to evaluate how well they partition the information space. Second, we build static and content-driven hierarchies instead of dynamic and context-driven (i.e., task-driven) ones. Third, we use ad hoc and implicit hierarchization criteria, whereas they should be explicit and shareable. This paper discusses the problems related to the construction of hierarchies, and presents a conceptual framework to turn them into reconfigurable and shareable artifacts. Moreover, it explores how dynamically reconfigurable hierarchies can better cope with the multi-faceted nature of content, illustrating these principles through a tool that validates our proposal.},
author = {Rodrigo Dias Arruda Senra and Claudia Bauzer Medeiros},
date = {2014-04-16},
journal = {International Journal of Metadata, Semantics and Ontologies},
keyword = {Data integration, Data sharing, Organization, Organograph, Personal Information Management},
link = {http://link.springer.com/article/10.1007%2Fs13740-014-0035-7},
pages = {15-28},
title = {Evaluate, Reorganize and Share: An Approach to Dynamically Organize Digital Hierarchies},
volume = {9},
year = {2014}
}
|
Vilar, Bruno Siqueira Campos Mendonça
Context driven workflow adaptation applied to healthcare planning (phdthesis)
Instituto de Computação - Universidade Estadual de Campinas (UNICAMP),
Campinas - SP,
phdthesis,
2014.
(
Abstract |
Links |
BibTeX |
Tags:
Computer software, Health planning, Hospitals, Workflow, Workflow management systems
)
@phdthesis{vilar2014,
abstract = {Workflow Management Systems (WfMS) are used to manage the execution of processes, improving efficiency and efficacy of the procedure in use. The driving forces behind the adoption and development of WfMSs are business and scientific applications. Associated research efforts resulted in consolidated mechanisms, consensual protocols and standards. In particular, a scientific WfMS helps scientists to specify and run distributed experiments. It provides several features that support activities within an experimental environment, such as providing flexibility to change workflow design and keeping provenance (and thus reproducibility) of experiments. On the other hand, barring a few research initiatives, WfMSs do not provide appropriate support to dynamic, context-based customization during run-time; on-the-fly adaptations usually require user intervention. This thesis is concerned with mending this gap, providing WfMSs with a context-aware mechanism to dynamically customize workflow execution. As a result, we designed and developed DynFlow - a software architecture that allows such a customization, applied to a specific domain: healthcare planning. This application domain was chosen because it is a very good example of context-sensitive customization. Indeed, healthcare procedures constantly undergo unexpected changes that may occur during a treatment, such as a patient's reaction to a medicine. To meet dynamic customization demands, healthcare planning research has developed semi-automated techniques to support fast changes of the careflow steps according to a patient's state and evolution. One such technique is Computer-Interpretable Guidelines (CIG), whose most prominent member is the Task-Network Model (TNM) -- a rule based approach able to build on the fly a plan according to the context. 
Our research led us to conclude that CIGs do not support features required by health professionals, such as distributed execution, provenance and extensibility, which are available from WfMSs. In other words, CIGs and WfMSs have complementary characteristics, and both are directed towards execution of activities. Given the above facts, the main contributions of the thesis are the following: (a) the design and development of DynFlow, whose underlying model blends TNM characteristics with WfMS; (b) the characterization of the main advantages and disadvantages of CIG models and workflow models; and (c) the implementation of a prototype, based on ontologies, applied to nursing care. Ontologies are used as a solution to enable interoperability across distinct SWfMS internal representations, as well as to support distinct healthcare vocabularies and procedures.},
address = {Campinas - SP},
author = {Bruno Siqueira Campos Mendonça Vilar},
date = {2014-04-14},
keyword = {Computer software, Health planning, Hospitals, Workflow, Workflow management systems},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2015/09/VilarBrunoSiqueiraCamposMendonça_D.pdf},
school = {Instituto de Computação - Universidade Estadual de Campinas (UNICAMP)},
title = {Context driven workflow adaptation applied to healthcare planning},
year = {2014}
}
|
Sousa, Renato Beserra;
Cugler, Daniel Cintra;
Malaverri, Joana Gonzales E.;
Medeiros, Claudia Bauzer
A Provenance-Based Approach to Manage Long Term Preservation of Scientific Data (conference)
2014 IEEE 30th International Conference on Data Engineering Workshops (ICDEW),
978-1-4799-3481-2,
2014.
(
Abstract |
Links |
BibTeX |
Tags:
Data quality
)
@conference{SousaMedeiros2014,
abstract = {Long term preservation of scientific data goes beyond the data, and extends to metadata preservation and curation. While several researchers emphasize curation processes, our work is geared towards assessing the quality of scientific (meta)data. The rationale behind this strategy is that scientific data are often accessible via metadata - and thus ensuring metadata quality is a means to provide long term accessibility. This paper discusses our quality assessment architecture, presenting a case study on animal sound recording metadata. Our case study is an example of the importance of periodically assessing (meta)data quality, since knowledge about the world may evolve, and quality decrease with time, hampering long term preservation.},
author = {Renato Beserra Sousa and Daniel Cintra Cugler and Joana Gonzales E. Malaverri and Claudia Bauzer Medeiros},
booktitle = {2014 IEEE 30th International Conference on Data Engineering Workshops (ICDEW)},
date = {2014-03-06},
isbn = {978-1-4799-3481-2},
keyword = {Data quality},
link = {http://ieeexplore.ieee.org/document/6818316/},
title = {A Provenance-Based Approach to Manage Long Term Preservation of Scientific Data},
year = {2014}
}
|
Miranda, Eduardo;
Santanchè, André
Linked biology technical aspects - linking phenotypes and phylogenetic trees. (Technical Report)
Institute of Computing, State University of Campinas,
Technical Report,
IC-14-06,
2014.
(
Abstract |
Links |
BibTeX |
Tags:
Techreport
)
@techreport{Miranda2014,
abstract = {A large number of studies in biology, including those involving phylogenetic tree reconstruction, result in the production of a huge amount of data - e.g., phenotype descriptions, morphological data matrices, etc. Biologists increasingly face a challenge and opportunity of effectively discovering useful knowledge by crossing and comparing several pieces of information, not always linked and integrated. Ontologies are one of the promising choices to address this challenge. However, the existing digital phenotypic descriptions are stored in semi-structured formats, making extensive use of natural language. This technical report is related to a research developed by us [] to address this problem, adding an intermediate step between semi-structured phenotypic descriptions and ontologies. It remodels semi-structured descriptions to a graph abstraction in which the data are linked. Graph transformations subsidize the transition from semi-structured data representation to a more formalized representation with ontologies. The present technical report drills down into the implementation details of our system. It provides a module to ingest phylogenetic trees and phenotype descriptions - represented in semi-structured formats - into a graph database. Additionally, two approaches to combine distinct data sources are presented, as well as an algorithm to trace changes in phylogenetic traits of trees.},
author = {Eduardo Miranda and André Santanchè},
date = {2014-02-01},
institution = {Institute of Computing, State University of Campinas},
keyword = {Techreport},
link = {http://www.ic.unicamp.br/~reltech/2014/14-06.pdf},
number = {IC-14-06},
pages = {56},
title = {Linked biology technical aspects - linking phenotypes and phylogenetic trees.},
type = {Technical Report},
year = {2014}
}
|
Malaverri, Joana E. Gonzales;
Santanchè, André;
Medeiros, Claudia Bauzer
A provenance-based approach to evaluate data quality in eScience (article)
Inderscience,
International Journal of Metadata, Semantics and Ontologies,
1,
2014.
(
Abstract |
Links |
BibTeX |
Tags:
Data quality
)
@article{Malaverri2014,
abstract = {Data quality is growing in relevance as a research topic. Quality assessment has been progressively incorporated in many business environments, and in software engineering practices. eScience environments, however, because of the multiplicity and heterogeneity of data sources and scientific experts involved in a given problem, complicate data quality assessment. This paper deals with the evaluation of the quality of data managed by eScience applications. Our approach is based on data provenance, i.e. the history of the origins and transformations applied to a given data product. Our contributions include (a) the specification of a framework to track data provenance and use it to derive quality information, (b) a model for data provenance based on the Open Provenance Model, and (c) a methodology to evaluate the quality of data based on its provenance. Our proposal is validated experimentally by a prototype that takes advantage of the Taverna workflow system.},
author = {Joana E. Gonzales Malaverri and André Santanchè and Claudia Bauzer Medeiros},
date = {2014-02-01},
journal = {International Journal of Metadata, Semantics and Ontologies},
keyword = {Data quality},
link = {http://dl.acm.org/citation.cfm?id=2579580},
number = {1},
pages = {15-18},
publisher = {Inderscience},
title = {A provenance-based approach to evaluate data quality in eScience},
volume = {9},
year = {2014}
}
Data quality is growing in relevance as a research topic. Quality assessment has been progressively incorporated in many business environments and in software engineering practices. eScience environments, however, because of the multiplicity and heterogeneity of data sources and scientific experts involved in a given problem, complicate data quality assessment. This paper deals with the evaluation of the quality of data managed by eScience applications. Our approach is based on data provenance, i.e. the history of the origins and transformations applied to a given data product. Our contributions include (a) the specification of a framework to track data provenance and use it to derive quality information, (b) a model for data provenance based on the Open Provenance Model, and (c) a methodology to evaluate the quality of data based on its provenance. Our proposal is validated experimentally by a prototype that takes advantage of the Taverna workflow system.
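As an illustrative aside only (this sketch is not the authors' model; the quality dimensions and weights below are hypothetical), deriving a quality score from provenance metadata can be as simple as a weighted combination of per-dimension ratings:

```python
# Hypothetical sketch: combine per-dimension provenance ratings (0..1)
# into a single weighted quality score for a data product.
# Dimension names and weights are illustrative assumptions, not from the paper.

def quality_score(provenance, weights):
    """Weighted mean of provenance-derived quality ratings."""
    total = sum(weights.values())
    return sum(provenance[dim] * w for dim, w in weights.items()) / total

# Ratings a provenance tracker might derive for one data set:
prov = {"source_reliability": 0.9, "process_trust": 0.6, "freshness": 1.0}
# Context-dependent weights chosen by the data's users:
w = {"source_reliability": 0.5, "process_trust": 0.3, "freshness": 0.2}
score = quality_score(prov, w)  # weighted mean over provenance dimensions
```

In the paper's terms, the ratings would come from the tracked provenance and the weights from the quality dimensions specialists define for a given context.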
|
2013 |
Miranda, Eduardo de Paula
Linked biology — from phenotypes towards phylogenetic trees (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
2013.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{Miranda2013,
abstract = {A large number of studies in biology, including those involving phylogenetic tree reconstruction, result in the production of a huge amount of data -- e.g., phenotype descriptions, morphological data matrices, phylogenetic trees, etc. Biologists increasingly face the challenge and opportunity of effectively discovering useful knowledge by crossing and comparing several pieces of information, which are not always linked and integrated. In this work, we are interested in a specific biology context, in which biologists apply computational tools to build and share digital descriptions of living beings. We propose a process that departs from fragmentary data sources, which we map to graphs, towards a full integration of descriptions through ontologies. Graph databases mediate this evolution process. They are less schema dependent and, since an ontology is also a graph, the mapping process from the initial graph towards an ontology becomes a sequence of graph transformations. Our motivation stems from the idea that transforming phenotypical descriptions into a network of relationships and looking for links among related elements will enhance the ability of solving more complex problems supported by machines. This work details the design principles behind our process and two practical implementations as proof of concept.},
author = {Eduardo de Paula Miranda},
date = {2013-11-22},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2015/04/Eduardo-Miranda-M.-Sc.-Dissertation.pdf},
school = {Instituto de Computação - Unicamp},
title = {Linked biology — from phenotypes towards phylogenetic trees},
year = {2013}
}
A large number of studies in biology, including those involving phylogenetic tree reconstruction, result in the production of a huge amount of data -- e.g., phenotype descriptions, morphological data matrices, phylogenetic trees, etc. Biologists increasingly face the challenge and opportunity of effectively discovering useful knowledge by crossing and comparing several pieces of information, which are not always linked and integrated. In this work, we are interested in a specific biology context, in which biologists apply computational tools to build and share digital descriptions of living beings. We propose a process that departs from fragmentary data sources, which we map to graphs, towards a full integration of descriptions through ontologies. Graph databases mediate this evolution process. They are less schema dependent and, since an ontology is also a graph, the mapping process from the initial graph towards an ontology becomes a sequence of graph transformations. Our motivation stems from the idea that transforming phenotypical descriptions into a network of relationships and looking for links among related elements will enhance the ability of solving more complex problems supported by machines. This work details the design principles behind our process and two practical implementations as proof of concept.
|
Miranda, Eduardo;
Grand, Anaïs;
Lebbe, Régine Vignes;
Santanchè, André
Coupling phenotype descriptions and phylogenetic trees: from SDD to ontologies via graph databases (conference)
TDWG 2013 Annual Conference,
Florence, Italy,
2013.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Miranda2013b,
abstract = {Characters are at the heart of the taxonomist’s tasks: discovering, describing, naming, comparing and characterizing new taxa, classifying them according to their phylogenetic relationships, and studying their history, diversity and distribution. Taxonomic works result in the production of a huge amount of data (e.g., phenotype descriptions, morphological data matrices, etc.) stated in free-text format and digitally represented in many semi-structured standards that often cannot be interconnected. A semantic framework is needed for the integration of characters across studies, and ontologies are one of the promising choices to address this challenge. We face two challenges in this context: (i) how to relate several unconnected ontologies to be used in ontology-based descriptions; and (ii) how to map/reuse the huge amount of existing resources developed before ontologies were available. To address (i), we present a semantic representation of characters, with a unifying meta-model that can be superimposed over existing bio-ontologies, disciplining their relations and favoring their integration. Given that converting taxonomic data into ontologies is not a straightforward task, to address (ii) we are implementing an intermediate step between semi-structured phenotypic descriptions and ontologies, based on graph databases. In the Semantic Web context, an ontology in RDF (Resource Description Framework)/OWL (Web Ontology Language) is essentially a graph in which the nodes and relations are objects and properties following some class model. Texts and labels in natural language appear as complementary documentation for human consumption. We mapped the SDD (Structured Descriptive Data) format to the graph model, remodeling semi-structured descriptions into a graph abstraction in which the data are linked, enabling the coupling of phylogenetic trees and phenotype descriptions.
Graph databases are less schema dependent and, since an ontology is also a graph, the mapping from the original graph towards an ontology becomes a sequence of graph transformations. This graph model was designed to be published on the Web in a Linked Data approach. Practical experiments are illustrated with the study of fossil ferns, using the programs Xper2 (for descriptions), which is compatible with the SDD standard, and LisBeth (for phylogenetics).},
address = {Florence, Italy},
author = {Eduardo Miranda and Anaïs Grand and Régine Vignes Lebbe and André Santanchè},
date = {2013-11-01},
keyword = {Conference},
link = {https://mbgserv18.mobot.org/ocs/index.php/tdwg/2013/paper/view/404},
publisher = {TDWG 2013 Annual Conference},
title = {Coupling phenotype descriptions and phylogenetic trees: from SDD to ontologies via graph databases},
year = {2013}
}
Characters are at the heart of the taxonomist’s tasks: discovering, describing, naming, comparing and characterizing new taxa, classifying them according to their phylogenetic relationships, and studying their history, diversity and distribution. Taxonomic works result in the production of a huge amount of data (e.g., phenotype descriptions, morphological data matrices, etc.) stated in free-text format and digitally represented in many semi-structured standards that often cannot be interconnected. A semantic framework is needed for the integration of characters across studies, and ontologies are one of the promising choices to address this challenge. We face two challenges in this context: (i) how to relate several unconnected ontologies to be used in ontology-based descriptions; and (ii) how to map/reuse the huge amount of existing resources developed before ontologies were available. To address (i), we present a semantic representation of characters, with a unifying meta-model that can be superimposed over existing bio-ontologies, disciplining their relations and favoring their integration. Given that converting taxonomic data into ontologies is not a straightforward task, to address (ii) we are implementing an intermediate step between semi-structured phenotypic descriptions and ontologies, based on graph databases. In the Semantic Web context, an ontology in RDF (Resource Description Framework)/OWL (Web Ontology Language) is essentially a graph in which the nodes and relations are objects and properties following some class model. Texts and labels in natural language appear as complementary documentation for human consumption. We mapped the SDD (Structured Descriptive Data) format to the graph model, remodeling semi-structured descriptions into a graph abstraction in which the data are linked, enabling the coupling of phylogenetic trees and phenotype descriptions.
Graph databases are less schema dependent and, since an ontology is also a graph, the mapping from the original graph towards an ontology becomes a sequence of graph transformations. This graph model was designed to be published on the Web in a Linked Data approach. Practical experiments are illustrated with the study of fossil ferns, using the programs Xper2 (for descriptions), which is compatible with the SDD standard, and LisBeth (for phylogenetics).
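As an illustrative aside only (not the authors' code; the nested data model below is a hypothetical stand-in for a parsed SDD description), remodeling a semi-structured phenotype description into linked graph edges can be sketched as:

```python
# Hypothetical sketch: flatten a parsed semi-structured phenotype description
# into (subject, predicate, object) edges of a property graph - the kind of
# intermediate representation a later graph-to-ontology mapping could consume.
# The {taxon: {character: state}} shape is an illustrative assumption.

def description_to_edges(description):
    """Turn nested {taxon: {character: state}} data into graph edges."""
    edges = []
    for taxon, characters in description.items():
        for character, state in characters.items():
            edges.append((taxon, "has_character", character))
            edges.append((character, "has_state", state))
    return edges

desc = {"Osmunda": {"frond": "bipinnate", "sori": "naked"}}
edges = description_to_edges(desc)
# Each edge could later become an RDF triple in the ontology step.
```

The point of the intermediate graph, as the abstract describes, is that both this representation and the target ontology are graphs, so the remaining work is a sequence of graph transformations.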
|
Cugler, Daniel Cintra;
Medeiros, Claudia Bauzer;
Shekhar, Shashi;
Toledo, Luís Felipe
A Geographical Approach for Metadata Quality Improvement in Biological Observation Databases (conference)
9th IEEE International Conference on e-Science,
2013.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Cugler2013,
abstract = {This paper addresses the problem of improving the quality of metadata in biological observation databases, in particular those associated with observations of living beings, and which are often used as a starting point for biodiversity analyses. Poor quality metadata lead to incorrect scientific conclusions, and can mislead experts in their analyses. Thus, it is important to design and develop methods to detect and correct metadata quality problems. This is a challenging problem because of the variety of issues concerning such metadata, e.g., misnaming of species, location uncertainty and imprecision concerning where observations were recorded. Related work is limited because it does not adequately model such issues. We propose a geographic approach based on expert-led classification of place and/or range mismatch anomalies detected by our algorithms. Our work is tested using a case study with the Fonoteca Neotropical Jacques Vielliard, one of the 10 largest animal sound collections in the world.},
author = {Daniel Cintra Cugler and Claudia Bauzer Medeiros and Shashi Shekhar and Luís Felipe Toledo},
booktitle = {9th IEEE International Conference on e-Science},
date = {2013-10-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/escience.pdf},
title = {A Geographical Approach for Metadata Quality Improvement in Biological Observation Databases},
year = {2013}
}
This paper addresses the problem of improving the quality of metadata in biological observation databases, in particular those associated with observations of living beings, and which are often used as a starting point for biodiversity analyses. Poor quality metadata lead to incorrect scientific conclusions, and can mislead experts in their analyses. Thus, it is important to design and develop methods to detect and correct metadata quality problems. This is a challenging problem because of the variety of issues concerning such metadata, e.g., misnaming of species, location uncertainty and imprecision concerning where observations were recorded. Related work is limited because it does not adequately model such issues. We propose a geographic approach based on expert-led classification of place and/or range mismatch anomalies detected by our algorithms. Our work is tested using a case study with the Fonoteca Neotropical Jacques Vielliard, one of the 10 largest animal sound collections in the world.
|
Miranda, Eduardo;
Santanchè, André
Unifying Phenotypes to Support Semantic Descriptions (conference)
6th Brazilian Conference on Ontological Research,
Belo Horizonte, Brazil,
2013.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Miranda2013c,
abstract = {In life sciences, many biological datasets are shared through the Web. All this abundance of data carries a great opportunity to explore complex relationships among the diversity of species. However, their physical format varies from independent data files to databases, which are heterogeneous in model and representation, hampering their integration. Ontologies are one of the promising choices to address this challenge. However, the existing digital phenotypic descriptions are stored in semi-structured formats, making extensive use of natural language. While this patrimony is highly relevant, converting it into ontologies is not a straightforward task. The present article addresses this problem by adding an intermediate step between semi-structured phenotypic descriptions and ontologies. It remodels semi-structured descriptions into a graph abstraction in which the data are linked. Graph transformations subsidize the transition from the semi-structured data representation to a more formalized representation through ontologies.},
address = {Belo Horizonte, Brazil},
author = {Eduardo Miranda and André Santanchè},
date = {2013-09-22},
keyword = {Conference},
link = {http://ceur-ws.org/Vol-1041/ontobras-2013_paper50.pdf},
pages = {12},
publisher = {6th Brazilian Conference on Ontological Research},
title = {Unifying Phenotypes to Support Semantic Descriptions},
year = {2013}
}
In life sciences, many biological datasets are shared through the Web. All this abundance of data carries a great opportunity to explore complex relationships among the diversity of species. However, their physical format varies from independent data files to databases, which are heterogeneous in model and representation, hampering their integration. Ontologies are one of the promising choices to address this challenge. However, the existing digital phenotypic descriptions are stored in semi-structured formats, making extensive use of natural language. While this patrimony is highly relevant, converting it into ontologies is not a straightforward task. The present article addresses this problem by adding an intermediate step between semi-structured phenotypic descriptions and ontologies. It remodels semi-structured descriptions into a graph abstraction in which the data are linked. Graph transformations subsidize the transition from the semi-structured data representation to a more formalized representation through ontologies.
|
Koga, Ivo Kenji
An Event-Based Approach to Process Environmental Data (phdthesis)
Instituto de Computação - Unicamp,
phdthesis,
2013.
(
Links |
BibTeX |
Tags:
PhDThesis
)
@phdthesis{Koga2013,
author = {Ivo Kenji Koga},
date = {2013-09-01},
keyword = {PhDThesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/2013-10-13-TESE-IvoKoga-v2.4.pdf},
school = {Instituto de Computação - Unicamp},
title = {An Event-Based Approach to Process Environmental Data},
year = {2013}
}
|
Jensen, R.;
Cruz, M.;
Gomes-Jr, L.;
Lopes, M.
Attributing fuzzy values to nursing diagnoses and their elements: the specialists' opinion. (article)
International Journal of Nursing Knowledge,
2013.
(
Links |
BibTeX |
Tags:
Journal Paper
)
@article{Jensen2013,
author = {R. Jensen and M. Cruz and L. Gomes-Jr and M. Lopes},
date = {2013-07-01},
journal = {International Journal of Nursing Knowledge},
keyword = {Journal Paper},
link = {http://onlinelibrary.wiley.com/doi/10.1111/j.2047-3095.2013.01242.x/abstract;jsessionid=07EC6F2CE5BB23E7FD42179189538132.d02t04},
title = {Attributing fuzzy values to nursing diagnoses and their elements: the specialists' opinion.},
year = {2013}
}
|
Gomes-Jr, L.;
Jensen, R.;
Santanche, A.
Query-based inferences in the Complex Data Management System (conference)
SLG/ICML 2013,
2013.
(
Links |
BibTeX |
Tags:
Conference
)
@conference{Gomes-Jr2013b,
author = {L. Gomes-Jr and R. Jensen and A. Santanche},
booktitle = {SLG/ICML 2013},
date = {2013-07-01},
keyword = {Conference},
link = {http://www.ic.unicamp.br/~ra041475/docs/gomes-jr_et_al-slg-2013.pdf},
title = {Query-based inferences in the Complex Data Management System},
year = {2013}
}
|
Gomes, Alessandra da Silva
Web metalaboratory (mastersthesis)
Instituto de Computação - Universidade Estadual de Campinas (UNICAMP),
Campinas - SP,
mastersthesis,
2013.
(
Abstract |
Links |
BibTeX |
Tags:
Distance education, Educational technology, Environmental laboratories, Web laboratories
)
@mastersthesis{Gomes2013,
abstract = {The amount of scientific data, services and on-line tools available on the Web offers an unprecedented opportunity to conceive new kinds of laboratories blending resources. Existing experimental and collected data can substantiate asynchronous laboratories. Combined with mashup-enabled software, it is possible to produce hybrid laboratories to confront, for example, synthetic simulations with observations. This work addresses this opportunity in the Education context through our metalaboratory, an authoring environment to produce laboratories by combining building blocks encapsulated in components. We introduce here the lab composition patterns and the active Web templates as fundamental mechanisms to support the lab authoring task. These laboratories can be embedded and mashed up in Web documents. This work shows practical experiments of producing Web virtual and hybrid laboratories.},
address = {Campinas - SP},
author = {Alessandra da Silva Gomes},
date = {2013-06-28},
keyword = {Distance education, Educational technology, Environmental laboratories, Web laboratories},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2015/09/GomesAlessandradaSilva_M.pdf},
school = {Instituto de Computação - Universidade Estadual de Campinas (UNICAMP)},
title = {Web metalaboratory},
year = {2013}
}
The amount of scientific data, services and on-line tools available on the Web offers an unprecedented opportunity to conceive new kinds of laboratories blending resources. Existing experimental and collected data can substantiate asynchronous laboratories. Combined with mashup-enabled software, it is possible to produce hybrid laboratories to confront, for example, synthetic simulations with observations. This work addresses this opportunity in the Education context through our metalaboratory, an authoring environment to produce laboratories by combining building blocks encapsulated in components. We introduce here the lab composition patterns and the active Web templates as fundamental mechanisms to support the lab authoring task. These laboratories can be embedded and mashed up in Web documents. This work shows practical experiments of producing Web virtual and hybrid laboratories.
|
Silva, Felipe Henriques da
Serial Annotator : managing annotations of time series (mastersthesis)
Universidade Estadual de Campinas,
mastersthesis,
2013.
(
Abstract |
Links |
BibTeX |
Tags:
time series
)
@mastersthesis{felipesilva,
abstract = {Time series are sequences of values measured at successive time instants. They are used in several domains such as agriculture, medicine and economics. The analysis of these series is of utmost importance, providing experts the ability to identify trends and forecast possible scenarios. In order to facilitate their analyses, experts often associate annotations with time series. Such annotations can also be used to correlate distinct series, or look for specific series in a database. There are many challenges involved in managing annotations - from finding proper structures to associate them with series, to organizing and retrieving series based on annotations. This work contributes to the management of time series. Its main contributions are the design and development of a framework for the management of multiple annotations associated with one or multiple time series in a database. The framework also provides means for annotation versioning, so that previous states of an annotation are never lost. Serial Annotator is an application implemented for the Android smartphone platform. It has been used to validate the proposed framework and has been tested with real data involving agriculture problems.},
author = {Felipe Henriques da Silva},
date = {2013-06-10},
keyword = {time series},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2015/08/SilvaFelipeHenriquesda_M.pdf},
school = {Universidade Estadual de Campinas},
title = {Serial Annotator : managing annotations of time series},
year = {2013}
}
Time series are sequences of values measured at successive time instants. They are used in several domains such as agriculture, medicine and economics. The analysis of these series is of utmost importance, providing experts the ability to identify trends and forecast possible scenarios. In order to facilitate their analyses, experts often associate annotations with time series. Such annotations can also be used to correlate distinct series, or look for specific series in a database. There are many challenges involved in managing annotations - from finding proper structures to associate them with series, to organizing and retrieving series based on annotations. This work contributes to the management of time series. Its main contributions are the design and development of a framework for the management of multiple annotations associated with one or multiple time series in a database. The framework also provides means for annotation versioning, so that previous states of an annotation are never lost. Serial Annotator is an application implemented for the Android smartphone platform. It has been used to validate the proposed framework and has been tested with real data involving agriculture problems.
|
Malaverri, Joana Esther Gonzales
Supporting data quality assessment in eScience: a provenance based approach (phdthesis)
Instituto de Computação - Unicamp,
phdthesis,
2013.
(
Abstract |
Links |
BibTeX |
Tags:
PhDThesis
)
@phdthesis{Malaverri2013,
abstract = {Data quality is a recurrent concern in all scientific domains. Experiments analyze and manipulate several kinds of datasets, and generate data to be (re)used by other experiments. The basis for obtaining good scientific results is highly associated with the degree of quality of such datasets. However, data involved with the experiments are manipulated by a wide range of users, with distinct research interests, using their own vocabularies, work methodologies, models, and sampling needs. Given this scenario, a challenge in computer science is to come up with solutions that help scientists to assess the quality of their data. Different efforts have been proposed addressing the estimation of quality. Some of these efforts outline that data provenance attributes should be used to evaluate quality. However, most of these initiatives address the evaluation of a specific quality attribute, frequently focusing on atomic data values, thereby reducing the applicability of these approaches. Taking this scenario into account, there is a need for new solutions that scientists can adopt to assess how good their data are. In this PhD research, we present an approach to attack this problem based on the notion of data provenance. Unlike other similar approaches, our proposal combines quality attributes specified within a context by specialists and metadata on the provenance of a data set. The main contributions of this work are: (i) the specification of a framework that takes advantage of data provenance to derive quality information; (ii) a methodology associated with this framework that outlines the procedures to support the assessment of quality; (iii) the proposal of two different provenance models to capture provenance information, for fixed and extensible scenarios; and (iv) validation of items (i) through (iii), with their discussion via case studies in agriculture and biodiversity.},
author = {Joana Esther Gonzales Malaverri},
date = {2013-05-01},
keyword = {PhDThesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/thesisJoana.pdf},
school = {Instituto de Computação - Unicamp},
title = {Supporting data quality assessment in eScience: a provenance based approach},
year = {2013}
}
Data quality is a recurrent concern in all scientific domains. Experiments analyze and manipulate several kinds of datasets, and generate data to be (re)used by other experiments. The basis for obtaining good scientific results is highly associated with the degree of quality of such datasets. However, data involved with the experiments are manipulated by a wide range of users, with distinct research interests, using their own vocabularies, work methodologies, models, and sampling needs. Given this scenario, a challenge in computer science is to come up with solutions that help scientists to assess the quality of their data. Different efforts have been proposed addressing the estimation of quality. Some of these efforts outline that data provenance attributes should be used to evaluate quality. However, most of these initiatives address the evaluation of a specific quality attribute, frequently focusing on atomic data values, thereby reducing the applicability of these approaches. Taking this scenario into account, there is a need for new solutions that scientists can adopt to assess how good their data are. In this PhD research, we present an approach to attack this problem based on the notion of data provenance. Unlike other similar approaches, our proposal combines quality attributes specified within a context by specialists and metadata on the provenance of a data set. The main contributions of this work are: (i) the specification of a framework that takes advantage of data provenance to derive quality information; (ii) a methodology associated with this framework that outlines the procedures to support the assessment of quality; (iii) the proposal of two different provenance models to capture provenance information, for fixed and extensible scenarios; and (iv) validation of items (i) through (iii), with their discussion via case studies in agriculture and biodiversity.
|
Longo, João Sávio Ceregatti
Management of integrity constraints for multi-scale geospatial data (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
2013.
(
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{Longo2013,
author = {João Sávio Ceregatti Longo},
date = {2013-03-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/master_thesis_final.pdf},
school = {Instituto de Computação - Unicamp},
title = {Management of integrity constraints for multi-scale geospatial data},
year = {2013}
}
|
Gomes-Jr, Luiz;
Jensen, Rodrigo;
Santanchè, André
Towards query model integration: topology-aware, IR-inspired metrics for declarative graph querying (conference)
Second International Workshop on Querying Graph Structured Data,
2013.
(
Abstract |
BibTeX |
Tags:
Conference
)
@conference{Gomes-Jr2013,
abstract = {Accompanying the growth of the internet and the consequent diversification of applications and data processing needs, there has been a rapid proliferation of data and query models. While graph models such as RDF have been successfully used to integrate data from diverse origins, interaction with the integrated data is still limited by inflexible query models that cannot express concepts from multiple paradigms. In this paper we analyze data and query models typical of modern data-driven applications. We then propose an integrated query model aimed at covering a broad range of applications, allowing expressive queries that capture elements from diverse data models and querying paradigms. We employ graph models to integrate data from structured and unstructured sources. We also reinterpret as graph analysis tasks several ranking metrics typical of information retrieval (IR) systems. The metrics allow flexible correlation of data elements based on topological properties of the underlying graph. The new query model is materialized in a query language named in* (in star). We present experiments with real data that demonstrate the expressiveness and practicability of our approach.},
author = {Luiz Gomes-Jr and Rodrigo Jensen and André Santanchè},
booktitle = {Second International Workshop on Querying Graph Structured Data},
date = {2013-03-01},
keyword = {Conference},
title = {Towards query model integration: topology-aware, IR-inspired metrics for declarative graph querying},
year = {2013}
}
Accompanying the growth of the internet and the consequent diversification of applications and data processing needs, there has been a rapid proliferation of data and query models. While graph models such as RDF have been successfully used to integrate data from diverse origins, interaction with the integrated data is still limited by inflexible query models that cannot express concepts from multiple paradigms. In this paper we analyze data and query models typical of modern data-driven applications. We then propose an integrated query model aimed at covering a broad range of applications, allowing expressive queries that capture elements from diverse data models and querying paradigms. We employ graph models to integrate data from structured and unstructured sources. We also reinterpret as graph analysis tasks several ranking metrics typical of information retrieval (IR) systems. The metrics allow flexible correlation of data elements based on topological properties of the underlying graph. The new query model is materialized in a query language named in* (in star). We present experiments with real data that demonstrate the expressiveness and practicability of our approach.
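As an illustrative aside only (this is not the in* language or the authors' metrics; the graph and ranking rule are hypothetical), a minimal example of a topology-aware relevance score is ranking nodes by hop distance from a query node:

```python
# Hypothetical sketch: an IR-style relevance metric reinterpreted as a graph
# analysis task - rank nodes by breadth-first hop distance from a query node.
# The adjacency structure and ranking rule are illustrative assumptions.

from collections import deque

def hop_distance_rank(adj, query):
    """Rank reachable nodes by BFS hop distance from the query node."""
    dist = {query: 0}
    frontier = deque([query])
    while frontier:
        node = frontier.popleft()
        for neighbor in adj.get(node, []):
            if neighbor not in dist:
                dist[neighbor] = dist[node] + 1
                frontier.append(neighbor)
    # Closer nodes rank higher; unreachable nodes are omitted.
    return sorted(dist, key=dist.get)

adj = {"a": ["b", "c"], "b": ["d"], "c": [], "d": []}
ranking = hop_distance_rank(adj, "a")
```

Richer topological metrics (centrality, random-walk relevance, and the like) follow the same pattern: correlate data elements through properties of the underlying graph rather than through fixed schema joins.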
|
Vilar, Bruno S. C. M.;
Medeiros, Claudia Bauzer;
Santanchè, André
Towards Adapting Scientific Workflow Systems to Healthcare Planning (conference)
HEALTHINF - International Conference on Health Informatics,
2013.
(
Links |
BibTeX |
Tags:
Conference
)
@conference{Vilar2013,
author = {Bruno S. C. M. Vilar and Claudia Bauzer Medeiros and André Santanchè},
booktitle = {HEALTHINF - International Conference on Health Informatics},
date = {2013-01-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/HealthInf-Biostec-2013.pdf},
title = {Towards Adapting Scientific Workflow Systems to Healthcare Planning},
year = {2013}
}
|
Malaverri, J. E. G.;
Mota, M. S.;
Medeiros, C. B.
Estimating the quality of data using provenance: a case study in eScience (conference)
19th Americas Conference on Information Systems (AMCIS),
2013.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Malaverri2013c,
abstract = {Data quality assessment is a key factor in data-intensive domains. The data deluge is aggravated by an increasing need for interoperability and cooperation across groups and organizations. New alternatives must be found to select the data that best satisfy users’ needs in a given context. This paper presents a strategy to provide information to support the evaluation of the quality of data sets. This strategy is based on combining metadata on the provenance of a data set (derived from workflows that generate it) and quality dimensions defined by the set’s users, based on the desired context of use. Our solution, validated via a case study, takes advantage of a semantic model to preserve data provenance related to applications in a specific domain.},
author = {J. E. G. Malaverri and M. S. Mota and C. B. Medeiros},
booktitle = {19th Americas Conference on Information Systems (AMCIS)},
date = {2013-01-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/AMCIS2013_Paper_ProvToQual.pdf},
title = {Estimating the quality of data using provenance: a case study in eScience},
year = {2013}
}
|
Malaverri, Joana E. Gonzales;
Santanchè, André;
Medeiros, Claudia Bauzer
A Provenance-based Approach to Evaluate Data Quality in eScience (article)
Int. J. Metadata, Semantics and Ontologies,
2013.
(
Abstract |
BibTeX |
Tags:
Article
)
@article{Malaverri2013,
abstract = {Data quality is growing in relevance as a research topic. This is becoming increasingly crucial in data-intensive domains, e.g., stock market and financial studies, eHealth, or environmental research. Indeed, the data deluge characteristic of eScience applications has brought about new concerns along this direction. Quality assessment methods and models have been progressively incorporated in many business environments, as well as in software engineering practices. eScience environments, however, are hard to assess for data quality because of the many data source providers, the kinds of scientific expertise needed, and the multiple time and space scales involved in a given problem. This paper is concerned with the evaluation of the quality of data managed by eScience applications. Our approach is based on data provenance, i.e., the history of the origins and transformation processes applied to a given data product. Our contributions include: (i) the specification of a framework to track data provenance and use this information to derive quality information; (ii) a model for data provenance based on the Open Provenance Model; and (iii) a methodology to evaluate the quality of some digital artifact based on its provenance. Our proposal is validated experimentally by a prototype we developed that takes advantage of the Taverna workflow system.},
author = {Joana E. Gonzales Malaverri and André Santanchè and Claudia Bauzer Medeiros},
date = {2013-01-01},
journal = {Int. J. Metadata, Semantics and Ontologies},
keyword = {Article},
title = {A Provenance-based Approach to Evaluate Data Quality in eScience},
year = {2013}
}
|
Longo, João Sávio Ceregatti;
Medeiros, Claudia Bauzer
Providing multi-scale consistency for multi-scale geospatial data (conference)
25th International Conference on Scientific and Statistical Database Management (SSDBM),
2013.
(
BibTeX |
Tags:
Conference
)
@conference{Longo2013b,
author = {João Sávio Ceregatti Longo and Claudia Bauzer Medeiros},
booktitle = {25th International Conference on Scientific and Statistical Database Management (SSDBM)},
date = {2013-01-01},
keyword = {Conference},
note = {Accepted},
pages = {12},
title = {Providing multi-scale consistency for multi-scale geospatial data},
year = {2013}
}
|
BERNARDO, I. R.;
MOTA, M. S.;
SANTANCHE, A.
Extracting and Semantically Integrating Implicit Schemas from Multiple Spreadsheets of Biology based on the Recognition of their Nature (article)
Journal of Information and Data Management - JIDM,
2,
2013.
(
Abstract |
Links |
BibTeX |
Tags:
Article
)
@article{BERNARDO2013,
abstract = {Spreadsheets are popular among users and organizations, becoming an essential data management tool. The ease of handling spreadsheets, associated with the creative freedom they offer, resulted in an increase in the volume of data available in this format. However, spreadsheets are not conceived to integrate data from distinct sources, and challenges arise involving the systematization of processes to reuse and combine their data. Many related initiatives address the problem of integrating data inside spreadsheets, focusing on lexical and syntactical aspects. However, the proper exploitation of the semantics related to this data is still an opportunity. In this sense, some related works propose mapping spreadsheet contents to open interoperability standards, mainly Semantic Web standards. The main limitation of such proposals is the assumption that it is possible to recognize and make explicit the schema and the semantics of spreadsheets automatically, regardless of their domain. This work differs from related work by assuming the essential role of the context – mainly the domain in which the spreadsheet was conceived – to delineate shared practices of the biology community, which establishes building patterns to be automatically recognized by our system, in a data extraction and schema recognition process. In this article, we present the result of a practical experiment involving such a system, in which we integrate hundreds of spreadsheets belonging to the biology domain and available on the Web. This integration was possible due to the observation that the recognition of a spreadsheet's nature can be achieved from its tabular organization.},
author = {BERNARDO, I. R. and MOTA, M. S. and SANTANCHE, A.},
date = {2013-01-01},
journal = {Journal of Information and Data Management - JIDM},
keyword = {Article},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/220-1058-1-PB.pdf},
number = {2},
pages = {104-114},
title = {Extracting and Semantically Integrating Implicit Schemas from Multiple Spreadsheets of Biology based on the Recognition of their Nature},
volume = {4},
year = {2013}
}
|
2012 |
Senra, Rodrigo Dias de Arruda
Organization is sharing : from eScience to personal information management (phdthesis)
Institute of Computing, UNICAMP,
phdthesis,
2012.
(
Abstract |
Links |
BibTeX |
Tags:
Personal Information Management
)
@phdthesis{senra2012,
abstract = {Information sharing has always been a key issue in any kind of joint effort. Paradoxically, with the data deluge, the more information available, the harder it is to design and implement solutions that effectively foster such sharing. This thesis analyzes distinct aspects of sharing - from eScience-related environments to personal information. As a result of this analysis, it provides answers to some of the problems encountered, along three axes. The first, SciFrame, is a specific framework that describes systems or processes involving scientific digital data manipulation, serving as a descriptive pattern to help system comparison. The adoption of SciFrame to describe distinct scientific virtual environments allows identifying commonalities and points for interoperation. The second axis contribution addresses the specific problem of communication between arbitrary systems and services provided by distinct database platforms, via the use of the so-called database descriptors or DBDs. These descriptors contribute to provide independence between applications and the services, thereby enhancing sharing across applications and databases. The third contribution, Organographs, provides means to deal with multifaceted information organization. It addresses problems of sharing personal information by means of exploiting the way we organize such information. Here, rather than trying to provide means to share the information itself, the unit of sharing is the organization of the information. By designing and sharing organographs, distinct groups provide each other dynamic, reconfigurable views of how information is organized, thereby promoting interoperability and reuse. Organographs are an innovative approach to hierarchical data management. These three contributions are centered on the basic idea of building and sharing hierarchical organizations. Part of these contributions was validated by case studies and, in the case of organographs, an actual implementation.},
author = {Rodrigo Dias de Arruda Senra},
date = {2012-12-10},
keyword = {Personal Information Management},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2015/04/SenraRodrigoDiasArruda_D3.pdf},
note = {Supervisor Claudia Bauzer Medeiros},
school = {Institute of Computing, UNICAMP},
title = {Organization is sharing : from eScience to personal information management},
year = {2012}
}
|
Santanchè, André;
Medeiros, Claudia Bauzer;
Jomier, Genevieve;
Zam, Michel
Challenges of the Anthropocene epoch - supporting multi-focus research (conference)
Proceedings of the XIII Brazilian Symposium on Geoinformatics - GeoInfo,
2012.
(
Links |
BibTeX |
Tags:
Conference
)
@conference{Santanche2012,
author = {André Santanchè and Claudia Bauzer Medeiros and Genevieve Jomier and Michel Zam},
booktitle = {Proceedings of the XIII Brazilian Symposium on Geoinformatics - GeoInfo},
date = {2012-11-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/santancheetal2012-v02.pdf},
pages = {1-10},
title = {Challenges of the Anthropocene epoch - supporting multi-focus research},
year = {2012}
}
|
Malaverri, Joana E. G.;
Medeiros, Claudia B.
Data Quality in Agriculture Applications (conference)
XIII Brazilian Symposium on GeoInformatics - GeoInfo,
2012.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Malaverri2012,
abstract = {Data quality is a common concern in a wide range of domains. Since agriculture plays an important role in the Brazilian economy, it is crucial that the data be useful and of a proper level of quality for decision making, planning activities, and other processes. Nevertheless, this requirement is not often taken into account when different systems and databases are modeled. This work presents a review of data quality issues, covering some efforts in agriculture and geospatial science to tackle these issues. The goal is to help researchers and practitioners design better applications. In particular, we focus on the different dimensions of quality and the approaches that are used to measure them.},
author = {Joana E. G. Malaverri and Claudia B. Medeiros},
booktitle = {XIII Brazilian Symposium on GeoInformatics - GeoInfo},
date = {2012-11-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/geoinfoJoana2012.pdf},
title = {Data Quality in Agriculture Applications},
year = {2012}
}
|
Bernardo, Ivelize Rocha;
Mota, Matheus Silva;
Santanchè, André
Extraindo e Integrando Semanticamente Dados de Múltiplas Planilhas Eletrônicas a Partir do Reconhecimento de Sua Natureza (conference)
Simpósio Brasileiro de Banco de Dados (SBBD),
2012.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Bernardo2012b,
abstract = {Spreadsheets are popular among users and organizations, becoming an essential data management tool. The ease of access, associated with the creative freedom offered by spreadsheets, resulted in the increase of the data volume available in this format. However, spreadsheets are not conceived for the integration of data from distinct sources, and challenges arise involving the systematization of processes to reuse and combine their data. Many related initiatives address the integration of data inside spreadsheets focusing on lexical and syntactical aspects; however, the exploration of the semantics related to these data is still an open challenge. In this sense, some related works propose mapping spreadsheet contents to open interoperability standards, mainly Semantic Web standards. The main limitation of such proposals is the assumption that it is possible to recognize and make explicit the schema and the semantics of spreadsheets automatically, regardless of their domain. This work differs from related work by assuming the essential role of the context – mainly the domain in which the spreadsheet was conceived – to delineate shared practices of the community, which establishes building standards to be automatically recognized by our system, in a data extraction process and schema recognition. In this paper we present the result of a practical experiment involving such a system, in which we integrated data from hundreds of spreadsheets available on the Web. This integration was possible due to a unique ability of our approach of recognizing the spreadsheet nature, analyzed inside its creation context.},
author = {Ivelize Rocha Bernardo and Matheus Silva Mota and André Santanchè},
booktitle = {Simpósio Brasileiro de Banco de Dados (SBBD)},
date = {2012-10-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/SBBD2012.pdf},
title = {Extraindo e Integrando Semanticamente Dados de Múltiplas Planilhas Eletrônicas a Partir do Reconhecimento de Sua Natureza},
year = {2012}
}
|
Bernardo, Ivelize Rocha
Planilhas eletrônicas, Web semântica, Recuperação da informação, Biologia - Processamento de dados (mastersthesis)
Instituto de Computação - Universidade Estadual de Campinas (UNICAMP),
Campinas - SP,
mastersthesis,
2012.
(
Abstract |
Links |
BibTeX |
Tags:
Biologia - Processamento de dados, Planilhas eletrônicas, Recuperação da informação, Web semântica
)
@mastersthesis{bernardo2012b,
abstract = {The flexibility provided by spreadsheets allows them to be customized following the mental models of their authors, making them popular data management systems. The need to integrate and articulate data from different spreadsheets has grown steadily and, for machines to assist in this process, the challenge is how to automatically interpret their implicit schemas, which are directed at human interpretation. Some works propose mapping the contents of spreadsheets to open interoperability standards, mainly those of the Semantic Web. The main limitation of these works is the assumption that it is possible to recognize and make explicit the schemas and the semantics of spreadsheets automatically, regardless of their domain. This work differs by considering the context and domain in which the spreadsheets were conceived essential to outline the set of practices shared by the community in question, which establishes building patterns to be automatically recognized by our system, in a process of data extraction and schema explicitation. Our proposal involves a strategy for characterizing building patterns associated with the conceptual models of authors in the construction of spreadsheets, which is the result of a broad survey of practices shared by spreadsheet authors in the Biology domain. In this document we present the result of a practical experiment involving such a system, in which we integrated the data of hundreds of spreadsheets available on the Web. This integration was possible due to the unique ability of our approach to recognize the nature of the analyzed spreadsheet within its creation context.},
address = {Campinas - SP},
author = {Ivelize Rocha Bernardo},
date = {2012-09-04},
keyword = {Biologia - Processamento de dados, Planilhas eletrônicas, Recuperação da informação, Web semântica},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2015/09/BernardoIvelizeRocha_M.pdf},
school = {Instituto de Computação - Universidade Estadual de Campinas (UNICAMP)},
title = {Planilhas eletrônicas, Web semântica, Recuperação da informação, Biologia - Processamento de dados},
year = {2012}
}
|
Koga, Ivo;
Medeiros, Claudia Bauzer
Integrating and processing events from Heterogeneous Data Sources (conference)
Proceedings VI eScience Workshop - XXXII Brazilian Computer Society Conference,
2012.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Koga2012,
abstract = {Environmental monitoring studies present many challenges. A huge amount of data are provided in different formats from different sources (e.g. sensor networks and databases). This paper presents a framework we have developed to overcome some of these problems, based on combining aspects of Enterprise Service Bus (ESB) architectures and Event Processing mechanisms. First, we treat integration using ESB and then use event processing to transform, filter and detect event patterns, where all data arriving at a given point are treated uniformly as event streams. A case study concerning data streams of meteorological stations is provided to show the feasibility of this solution.},
author = {Ivo Koga and Claudia Bauzer Medeiros},
booktitle = {Proceedings VI eScience Workshop - XXXII Brazilian Computer Society Conference},
date = {2012-07-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/CSBC-Workshop-eScience-Ivo-2012-06-06.pdf},
title = {Integrating and processing events from Heterogeneous Data Sources},
year = {2012}
}
|
Cugler, Daniel Cintra;
Medeiros, Claudia Bauzer;
Toledo, Felipe
An architecture for retrieval of animal sound recordings based on context variables (article)
Concurrency and Computation - Practice and Experience,
2012.
(
Abstract |
BibTeX |
Tags:
Article
)
@article{Cugler2012,
abstract = {For decades, biologists around the world have recorded animal sounds. As the number of records grows, so does the difficulty of managing them, presenting challenges to save, retrieve, share and manage sounds. These challenges are complicated by the fact that animal sound recordings have specific peculiarities, associated with the context in which the sound was recorded. For example, sounds emitted by individuals that are in groups may be different from ones emitted by isolated individuals. Though these characteristics may be relevant to biologists, they are seldom explicit in the recording metadata. This paper discusses our ongoing research on the management of sound recordings, considering factors such as environmental or social contexts, which are not treated by current systems. This work exploits retrieval based on context analysis. Query parameters include context variables that are dynamically derived using public services and ontologies associated with sound recording metadata. Part of the results have been validated through a web prototype, discussed in the text.},
author = {Daniel Cintra Cugler and Claudia Bauzer Medeiros and Felipe Toledo},
date = {2012-06-01},
journal = {Concurrency and Computation - Practice and Experience},
keyword = {Article},
title = {An architecture for retrieval of animal sound recordings based on context variables},
year = {2012}
}
|
Fedel, Gabriel de S.;
Medeiros, Claudia Bauzer;
Santos, Jefersson Alex dos
Sinimbu - Multimodal queries to support biodiversity studies (conference)
COMPUTATIONAL SCIENCE AND ITS APPLICATIONS – ICCSA 2012,
LNCS,
2012.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{deFedel2012,
abstract = {Typical biodiversity information systems can only solve a small part of user concerns. Available query mechanisms are based on traditional textual database manipulations, combining them with spatial correlations. However, experts need more complex computations – e.g., using non-textual data sources. This involves a considerable amount of manual tasks to obtain the needed information. This paper presents the specification and implementation of Sinimbu – a framework to process multimodal queries that support both text and images as search parameters, for biodiversity studies, thus providing support for subsequent complex simulations. Sinimbu was validated with real data from our university’s Zoology Museum, which houses one of the largest zoological museum collections in Brazil. Not only can users interact with the system in several modes, but query possibilities (and answers) vary according to the user’s profile. Query processing in Sinimbu combines work in database management, image processing and ontology construction and management.},
author = {Gabriel de S. Fedel and Claudia Bauzer Medeiros and Jefersson Alex dos Santos},
booktitle = {COMPUTATIONAL SCIENCE AND ITS APPLICATIONS – ICCSA 2012},
date = {2012-05-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/fedel_ICCSA2012.pdf},
pages = {620-634},
publisher = {LNCS},
title = {Sinimbu - Multimodal queries to support biodiversity studies},
volume = {7333/2012},
year = {2012}
}
|
Mota, Matheus Silva
Shadows: a new means of representing documents (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
2012.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{Mota2012,
abstract = {Document production tools are present everywhere, resulting in an exponential growth of increasingly complex, distributed and heterogeneous documents. This hampers document exchange, as well as their annotation and retrieval. While information retrieval mechanisms concentrate on textual features (corpus analysis), annotation approaches either target specific formats or require that a document follows interoperable standards -- defined via schemas. This work presents our effort to handle these problems, providing a more flexible solution. Rather than trying to modify or convert the document itself, or to target only textual characteristics, the strategy described in this work is based on an intermediate descriptor -- the document shadow. A shadow represents domain-relevant aspects and elements of both structure and content of a given document. Shadows are not restricted to the description of textual features, but also concern other elements, such as multimedia artifacts. Furthermore, shadows can be stored in a database, thereby supporting queries on document structure and content, regardless of document format.},
author = {Matheus Silva Mota},
date = {2012-05-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/DissertationMatheus.pdf},
school = {Instituto de Computação - Unicamp},
title = {Shadows: a new means of representing documents},
year = {2012}
}
|
Alves, Hugo Augusto
Ontologias Folksonomizadas - Uma Abordagem para Fusão de Ontologias e Folksonomias (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
2012.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{Alves2012,
abstract = {A growing number of web repositories rely on metadata in the form of tags to organize and classify their content. Users of these systems freely associate tags with system resources – e.g., URLs, images, bookmarks. The term folksonomy refers to this collective classification, which emerges from the tagging process carried out by users interacting in social environments on the web. One of the greatest strengths of folksonomies is their ease of use, due to the absence of a controlled vocabulary. Folksonomies grow organically, reflecting the knowledge of the user community. On the other hand, this lack of structure leads to difficulties in content organization and discovery. Better results can be obtained if the semantic relations between tags are taken into account. For this reason, several works have been proposed to relate ontologies and folksonomies, combining the systematized structure of ontologies with the latent semantics of folksonomies. In one direction, some approaches create “social ontologies” from folksonomy data; in the other, some approaches connect tags to preexisting ontologies. In both cases there is a unidirectionality, i.e., one model only supports the enrichment of the other. Our proposal, in contrast, is bidirectional. Ontologies and folksonomies are merged into a new entity, which we call a “folksonomized ontology”, combining complementary aspects of both. The formal, designed knowledge of ontologies is fused with the latent semantics of social data. In this dissertation we present our folksonomized ontology and its ramifications. We introduce a formal framework for the analysis of related work, in order to compare it with our approach.
Besides the improvements in indexing and discovery operations, which were validated in practical experiments, we propose a technique called 3E Steps to support ontology evolution using folksonomy data. We also implemented a prototype tool for building folksonomized ontologies and supporting ontology revision.},
author = {Hugo Augusto Alves},
date = {2012-04-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/Ontologias-Folksonomizadas.pdf},
school = {Instituto de Computação - Unicamp},
title = {Ontologias Folksonomizadas - Uma Abordagem para Fusão de Ontologias e Folksonomias},
year = {2012}
}
|
Gatto, Sandro Danilo;
Santanchè, André
Multi-representation Lens for Visual Analytics (conference)
Proceedings of ICDE,
IEEE,
2012.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Gatto2012,
abstract = {Modern data analysis deeply relies on computational visualization tools, especially when spatial data is involved. Important efforts in governmental and private agencies are looking for patterns and insights buried in dispersive, massive amounts of data (conventional, spatiotemporal, etc.). In Visual Analytics, users must be empowered to analyze data from different perspectives, integrating, transforming, aggregating and deriving new representations of conventional as well as spatial data. However, a challenge for visual analysis tools is how to articulate such a wide variety of data models and formats, especially when multiple representations of geographic elements are involved. A usual approach is to convert data to a database - e.g., a multi-representation database - which centralizes and homogenizes them. This approach has restrictions when facing the dynamic and distributed model of the Web. In this paper we propose an on-the-fly, on-demand multi-representation data integration and homogenization approach, named Lens, as an alternative that fits better with the Web. It combines a metamodel-driven approach to transform data to a unifying multidimensional and multi-representation model, with a middleware-based architecture for seamless and on-the-fly data access, tailored to Visual Analytics.},
author = {Sandro Danilo Gatto and André Santanchè},
booktitle = {Proceedings of ICDE},
date = {2012-03-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/PID2162539.pdf},
publisher = {IEEE},
title = {Multi-representation Lens for Visual Analytics},
year = {2012}
}
|
Nakai, Alan Massaru
Novas Técnicas de Distribuição de Carga para Servidores Web Geograficamente Distribuídos (phdthesis)
Instituto de Computação - Unicamp,
phdthesis,
2012.
(
Abstract |
Links |
BibTeX |
Tags:
PhDThesis
)
@phdthesis{Nakai2012,
abstract = {Load distribution is a problem intrinsic to distributed systems. This thesis addresses it in the context of geographically distributed web servers. Replicating web servers across geographically distributed \\emph{datacenters} provides fault tolerance and the possibility of offering better response times to clients. A key issue in such scenarios is the efficiency of the load-distribution solution employed to divide the system load among the server replicas. Load distribution allows providers to make better use of their resources, reducing the need for extra provisioning and helping to tolerate load peaks until the system is adjusted. The goal of this work was to study and propose new load-distribution solutions for geographically distributed web servers. To this end, two tools were implemented to support the analysis and development of new solutions: a test platform built on top of a real web-service implementation, and simulation software based on a realistic model of web workload generation. The main contributions of this thesis are four new load-distribution solutions covering three different types: DNS-based, client-based, and server-based solutions.},
author = {Alan Massaru Nakai},
date = {2012-01-01},
keyword = {PhDThesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/tese_nakai2012_final.pdf},
school = {Instituto de Computação - Unicamp},
title = {Novas Técnicas de Distribuição de Carga para Servidores Web Geograficamente Distribuídos},
year = {2012}
}
|
Mota, Matheus Silva;
Medeiros, Claudia Bauzer
Introducing Shadows: Flexible Document Representation and Annotation on the Web (conference)
4th International Workshop on Data Engineering Meets the Semantic Web (DESWEB) -- co-located with 29th IEEE International Conference on Data Engineering (ICDE2013),
IEEE,
2012.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Mota2012b,
abstract = {The Web is witnessing an exponential growth of increasingly complex, distributed and heterogeneous documents. This hampers document exchange, as well as their annotation and retrieval. While information retrieval mechanisms concentrate on textual features (corpus analysis), annotation approaches either target specific formats or require that a document follows interoperable standards. This work presents our effort to handle these problems, providing a more flexible solution. Rather than trying to modify or convert the document itself, or to target only textual characteristics, the strategy described in this work is based on an intermediate descriptor -- the document shadow. A shadow represents domain-relevant aspects and elements of both structure and content of a given document, as defined by a user group. Rather than annotating documents themselves, it is the shadows that are annotated, thereby providing independence between annotations and document formats. Our annotations take advantage of the LOD initiative. Via annotations users can derive correlations across shadows, in a flexible way. Moreover, shadows and annotations are stored in databases, therefore allowing uniform database treatments of heterogeneous documents.},
author = {Matheus Silva Mota and Claudia Bauzer Medeiros},
booktitle = {4th International Workshop on Data Engineering Meets the Semantic Web (DESWEB) -- co-located with 29th IEEE International Conference on Data Engineering (ICDE2013)},
date = {2012-01-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/ICDEW13wkx_DESWEB_04.pdf},
publisher = {IEEE},
title = {Introducing Shadows: Flexible Document Representation and Annotation on the Web},
year = {2012}
}
|
Malaverri, Joana E. G.;
Medeiros, Claudia Bauzer;
Lamparelli, Rubens Camargo
A Provenance Approach to Assess the Quality of Geospatial Data (conference)
27th Symposium On Applied Computing (SAC),
2012.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Malaverri2012b,
abstract = {Geographic information is present in our daily lives. This pervasiveness is also at the origin of several problems, including heterogeneity and trustworthiness -- of the data sources, of the data providers, and of the data products derived from the original sources. Most efforts to improve this situation concentrate on establishing data collection and curation standards, and quality metadata. This paper extends these efforts by presenting an approach to assess quality of geospatial data based on provenance.},
author = {Joana E. G. Malaverri and Claudia Bauzer Medeiros and Rubens Camargo Lamparelli},
booktitle = {27th Symposium On Applied Computing (SAC)},
date = {2012-01-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/artigo.pdf},
title = {A Provenance Approach to Assess the Quality of Geospatial Data},
year = {2012}
}
|
Longo, João Sávio C.;
Camargo, Luís Theodoro O.;
Medeiros, Claudia Bauzer;
Santanchè, André
Using the DBV model to maintain versions of multi-scale geospatial data (conference)
Advances in Conceptual Modeling,
Springer-Verlag,
2012.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Longo2012,
abstract = {Work on multi-scale issues concerning geospatial data presents countless challenges that have long been attacked by GIScience researchers. Indeed, a given real-world problem must often be studied at distinct scales in order to be solved. Most implementation solutions go either towards generalization (and/or virtualization of distinct scales) or towards linking entities of interest across scales. In this context, the possibility of maintaining the history of changes at each scale is another factor to be considered. This paper presents our solution to these issues, which accommodates all previous research on handling multiple scales into a unifying framework. Our solution builds upon a specific database version model -- the multiversion MVDB -- which has already been successfully implemented in several geospatial scenarios, being extended here to support multi-scale research. The paper also presents our implementation of a framework based on the model to handle and keep track of multi-scale data evolution.},
author = {João Sávio C. Longo and Luís Theodoro O. Camargo and Claudia Bauzer Medeiros and André Santanchè},
booktitle = {Advances in Conceptual Modeling},
date = {2012-01-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/dbv_multi_scale_api_lis.pdf},
pages = {284-293},
publisher = {Springer-Verlag},
title = {Using the DBV model to maintain versions of multi-scale geospatial data},
volume = {7518},
year = {2012}
}
|
Gomes, Alessandra;
Santanchè, André
Web-based Lab For Taxonomic Description (conference)
Anais do XI Workshop de Ferramentas e Aplicações - WebMedia,
2012.
(
Links |
BibTeX |
Tags:
Conference
)
@conference{Gomes2012,
author = {Alessandra Gomes and André Santanchè},
booktitle = {Anais do XI Workshop de Ferramentas e Aplicações - WebMedia},
date = {2012-01-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/Paper-TaxonomicLab-Alessandra-Andre-WFA2012.pdf},
title = {Web-based Lab For Taxonomic Description},
year = {2012}
}
|
Bernardo, Ivelize Rocha;
Santanchè, André;
Baranauskas, Maria Cecília Calani
Reconhecendo Padrões em Planilhas no domínio de uso da Biologia (conference)
Simpósio Brasileiro de Sistemas de Informação (SBSI),
2012.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Bernardo2012,
abstract = {Most research data handled by biologists are in electronic spreadsheets. Spreadsheets became a popular technique to create data tables, which are easy to implement as isolated entities, but are inappropriate for integration with other spreadsheets or data sources and for enhanced queries, due to the informality of their implicit schemas. Several initiatives aim to interpret these implicit schemas of spreadsheets, making them explicit in order to drive the extraction and mapping of native data to open interoperability standards. However, we observed limitations in this interpretation process, which is detached from the spreadsheet creation context. In this paper we present a strategy for characterizing spreadsheets, centered in their creation context, and we investigate how this characterization can be used to improve an automated interpretation and mapping of their respective schemas in the Biology usage domain. The strategy presented here supports work in progress on a tool to automatically recognize spreadsheet schemas.},
author = {Ivelize Rocha Bernardo and André Santanchè and Maria Cecília Calani Baranauskas},
booktitle = {Simpósio Brasileiro de Sistemas de Informação (SBSI)},
date = {2012-01-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/sbbd_shp_33.pdf},
pages = {360-371},
title = {Reconhecendo Padrões em Planilhas no domínio de uso da Biologia},
year = {2012}
}
|
Alves, Hugo;
Santanchè, André
Abstract Framework for Social Ontologies and Folksonomized Ontologies (conference)
4th International Workshop on Semantic Web Information Management,
SWIM,
2012.
(
Links |
BibTeX |
Tags:
Conference
)
@conference{Alves2012b,
address = {SWIM},
author = {Hugo Alves and André Santanchè},
booktitle = {4th International Workshop on Semantic Web Information Management},
date = {2012-01-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/swim2012.pdf},
title = {Abstract Framework for Social Ontologies and Folksonomized Ontologies},
year = {2012}
}
|
2011 |
Mota, Matheus Silva;
Longo, João Sávio Ceregatti;
Cugler, Daniel Cintra;
Medeiros, Claudia Bauzer
Using linked data to extract geo-knowledge (conference)
XII Brazilian Symposium on GeoInformatics - GeoInfo,
2011.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Mota2011,
abstract = {There are several approaches to extract geo-knowledge from documents and textual fields in databases. Most of them focus on detecting geographic evidence, from which the associated geographic location can be determined. This paper is based on a different premise -- geo-knowledge can be extracted even from non-geographic evidence, taking advantage of the linked data paradigm. The paper gives an overview of our approach and presents two case studies to extract geo-knowledge from documents and databases in the biodiversity domain.},
author = {Matheus Silva Mota and João Sávio Ceregatti Longo and Daniel Cintra Cugler and Claudia Bauzer Medeiros},
booktitle = {XII Brazilian Symposium on GeoInformatics - GeoInfo},
date = {2011-11-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/paper.pdf},
title = {Using linked data to extract geo-knowledge},
year = {2011}
}
|
Mota, Matheus;
Medeiros, Claudia Bauzer
Shadow-driven Document Representation: A summarization-based strategy to represent non-interoperable documents (conference)
XI Workshop on Ongoing Thesis and Dissertations - WebMedia,
SBC,
2011.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Mota2011b,
abstract = {Document production tools are present everywhere, resulting in an exponential growth of increasingly complex, distributed and heterogeneous documents. This hampers document exchange, as well as their annotation, indexing and retrieval. Existing approaches to these tasks either concentrate on specific formats or require representing a document's content using interoperable standards or schemas. This work presents our effort to handle this problem. Rather than trying to modify or convert the document itself, our strategy defines an intermediate and interoperable descriptor – shadow – that summarizes key aspects and elements of a given document, improving its annotation, indexing and retrieval process regardless of its format. Shadows can be used for different purposes, from semantic and context-sensitive annotations to content indexing and clustering.},
author = {Matheus Mota and Claudia Bauzer Medeiros},
booktitle = {XI Workshop on Ongoing Thesis and Dissertations - WebMedia},
date = {2011-10-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/paper-5.pdf},
pages = {4},
publisher = {SBC},
title = {Shadow-driven Document Representation: A summarization-based strategy to represent non-interoperable documents},
year = {2011}
}
|
Alves, Hugo;
Santanchè, André
Folksonomized Ontologies - from social to formal (conference)
Proceedings of XVII Brazilian Symposium on Multimedia and the Web,
2011.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Alves2011,
abstract = {An ever-increasing number of web-based repositories aimed at sharing content, links or metadata rely on tags informed by users to describe, classify and organize their data. The term folksonomy has been used to define this "social taxonomy", which emerges from tagging carried out by users interacting in social environments. It contrasts with the formalism and systematic creation process applied to ontologies. In our research we propose that ontologies and folksonomies have complementary roles. The knowledge systematically organized and formalized in ontologies can be enriched and contextualized by the implicit knowledge which emerges from folksonomies. This paper presents our approach to build a "folksonomized" ontology as a confluence of a formal ontology enriched with social knowledge extracted from folksonomies. The formal embodiment of folksonomies has been explored to empower content search and classification. On the other hand, ontologies are supplied with contextual data, which can improve relationship weighting and inference operations. The paper shows a tool we have implemented to produce and use folksonomized ontologies. It was used to attest that search operations can be improved by this combination of ontologies with folksonomies.},
author = {Hugo Alves and André Santanchè},
booktitle = {Proceedings of XVII Brazilian Symposium on Multimedia and the Web},
date = {2011-10-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/folksonomized-ontologies.pdf},
title = {Folksonomized Ontologies - from social to formal},
year = {2011}
}
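The confluence the abstract describes — a formal ontology enriched with social evidence mined from a folksonomy — can be illustrated with a small sketch. The data and the co-occurrence weighting scheme below are illustrative assumptions, not the paper's actual model:

```python
from collections import Counter
from itertools import combinations

def folksonomize(ontology_edges, taggings):
    """Weight each ontology edge by how often its two concepts
    co-occur as tags on the same resource — a simple proxy for
    the social evidence a folksonomy provides."""
    cooccur = Counter()
    for tags in taggings:  # tags applied by users to one resource
        for a, b in combinations(sorted(set(tags)), 2):
            cooccur[(a, b)] += 1
    weighted = {}
    for a, b in ontology_edges:
        key = tuple(sorted((a, b)))
        weighted[(a, b)] = cooccur.get(key, 0)
    return weighted

edges = [("cat", "mammal"), ("mammal", "animal")]
tagged_resources = [["cat", "mammal", "pet"], ["cat", "mammal"], ["mammal", "animal"]]
print(folksonomize(edges, tagged_resources))
# {('cat', 'mammal'): 2, ('mammal', 'animal'): 1}
```

Edges with higher co-occurrence counts could then be favored during relationship weighting and search, as the abstract suggests.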
|
Koga, Ivo;
Medeiros, Claudia Bauzer;
Branquinho, Omar
Handling and Publishing Wireless Sensor Network Data: a hands-on experiment (article)
Journal of Computational Interdisciplinary Sciences (JCIS),
1,
2011.
(
Abstract |
Links |
BibTeX |
Tags:
Article
)
@article{Koga2011,
abstract = {eScience research, in computer science, concerns the development of tools, models and techniques to help scientists from other domains to develop their own research. One problem which is common to all fields is concerned with the management of heterogeneous data, offering multiple interaction possibilities. This paper presents a proposal to help solve this problem, tailored to wireless sensor data – an important data source in eScience. This proposal is illustrated with a case study.},
author = {Ivo Koga and Claudia Bauzer Medeiros and Omar Branquinho},
date = {2011-09-01},
journal = {Journal of Computational Interdisciplinary Sciences (JCIS)},
keyword = {Article},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/2011-09-20-publicado-JCIS-v2n1a02.pdf},
note = {J. Comp. Int. Sci., Volume 2, Issue 1, 2011, 13-22, pdn: jcis.2011.02.01.0028 © copyright 2011 PACIS [http://epacis.net/jcis.php]},
number = {1},
pages = {13-22},
title = {Handling and Publishing Wireless Sensor Network Data: a hands-on experiment},
volume = {2},
year = {2011}
}
|
Jomier, Genevieve;
Medeiros, Claudia Bauzer;
Santanche, Andre
The Multi-focus approach: multidisciplinary cooperations on the Web (Position paper) (article)
Proc. II Workshop of the INCT on Web Science,
2011.
(
Abstract |
Links |
BibTeX |
Tags:
Article
)
@article{Jomier2011,
abstract = {This paper is concerned with discussing issues associated with the emerging paradigm of collaborative scientific environments on the Web, and on challenges facing teams with complementary expertise, who work across the Web. The emphasis is on the multiple focuses in which these groups attack a problem, and how this can be approached from a spatio-temporal database perspective.},
author = {Genevieve Jomier and Claudia Bauzer Medeiros and Andre Santanche},
date = {2011-07-01},
journal = {Proc. II Workshop of the INCT on Web Science},
keyword = {Article},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/INCT-WEB2011.pdf},
note = {Paper presented at the II Workshop at the Brazilian Institute of Web Science},
title = {The Multi-focus approach: multidisciplinary cooperations on the Web (Position paper)},
year = {2011}
}
|
Cugler, Daniel Cintra;
Medeiros, Claudia Bauzer;
Toledo, Luís Felipe
Managing Animal Sounds - Some Challenges and Research Directions (conference)
Proceedings V eScience Workshop - XXXI Brazilian Computer Society Conference,
2011.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Cugler2011,
abstract = {For decades, biologists around the world have recorded animal sounds. As the number of records grows, so does the difficulty to manage them, presenting challenges to save, retrieve, share and manage the sounds. This paper presents our preliminary results concerning management of large volumes of animal sound data. The paper also provides an overview from our prototype, an online environment focused on management of this data. This paper also discusses our case study, concerning more than 1 terabyte of animal recordings from Fonoteca Neotropical Jacques Vielliard, at UNICAMP, Brazil.},
author = {Daniel Cintra Cugler and Claudia Bauzer Medeiros and Luís Felipe Toledo},
booktitle = {Proceedings V eScience Workshop - XXXI Brazilian Computer Society Conference},
date = {2011-07-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/CSBC.pdf},
title = {Managing Animal Sounds - Some Challenges and Research Directions},
year = {2011}
}
|
Senra, Rodrigo Dias Arruda;
Medeiros, Claudia Bauzer
ORGANOGRAPHS Multi-faceted Hierarchical Categorization of Web Documents (conference)
Proceedings WEBIST - 7th International Conference on Web Information Systems,
INSTICC,
2011.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Senra2011,
abstract = {The data deluge of information in the Web challenges internauts to organize their references to interesting content in the Web as well as in their private storage space off-line. Having an automatically managed personal index to content acquired from the Web is useful for everybody, but critical to researchers and scholars. In this paper, we discuss concepts and problems related to organizing information through multi-faceted hierarchical categorization. We introduce the organograph as a mechanism to specify multiple views of how content is organized. Organographs can help scientists to automatically organize their documents along multiple axes, improving sharing and navigation through themes and concepts according to a particular research objective.},
author = {Rodrigo Dias Arruda Senra and Claudia Bauzer Medeiros},
booktitle = {Proceedings WEBIST - 7th International Conference on Web Information Systems},
date = {2011-05-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/WebOrganization.pdf},
publisher = {INSTICC},
title = {ORGANOGRAPHS Multi-faceted Hierarchical Categorization of Web Documents},
year = {2011}
}
|
Nakai, Alan Massaru;
Madeira, Edmundo;
Buzato, Luiz E.
Improving the QoS of Web Services via Client-Based Load Distribution (conference)
XXIX Simpósio Brasileiro de Redes de Computadores e Sistemas Distribuídos (Aceito para apresentação),
2011.
(
Abstract |
BibTeX |
Tags:
Conference
)
@conference{Nakai2011,
abstract = {The replication of a web service over geographically distributed locations can improve the QoS perceived by its clients. An important issue in such a deployment is the efficiency of the policy applied to distribute client requests among the replicas. In this paper, we propose a new approach for client-based load distribution that adaptively changes the fraction of load each client submits to each service replica to try to minimize overall response times. Our results show that the proposed strategy can achieve better response times than algorithms that eagerly try to choose the best replica for each client.},
author = {Alan Massaru Nakai and Edmundo Madeira and Luiz E. Buzato},
booktitle = {XXIX Simpósio Brasileiro de Redes de Computadores e Sistemas Distribuídos (Aceito para apresentação)},
date = {2011-05-01},
keyword = {Conference},
note = {Accepted for presentation},
title = {Improving the QoS of Web Services via Client-Based Load Distribution},
year = {2011}
}
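The adaptive policy sketched in the abstract — each client shifts the fraction of requests it sends to each replica according to observed response times — might look like the following toy illustration. The update rule, smoothing factor, and class names are assumptions for exposition, not the authors' algorithm:

```python
import random

class ClientLoadBalancer:
    """Toy client-side load distributor: one weight per replica,
    requests sent proportionally to the weights, and weights nudged
    away from replicas whose smoothed response times grow."""
    def __init__(self, replicas, alpha=0.3):
        self.weights = {r: 1.0 for r in replicas}
        self.alpha = alpha           # smoothing factor for response-time estimates
        self.rtt = {r: None for r in replicas}

    def pick(self):
        """Sample a replica with probability proportional to its weight."""
        total = sum(self.weights.values())
        x = random.uniform(0, total)
        for r, w in self.weights.items():
            x -= w
            if x <= 0:
                return r
        return r  # fall through on floating-point edge cases

    def report(self, replica, response_time):
        """Record an observed response time and update the replica's weight."""
        prev = self.rtt[replica]
        self.rtt[replica] = response_time if prev is None else (
            (1 - self.alpha) * prev + self.alpha * response_time)
        # weight inversely proportional to the smoothed response time
        self.weights[replica] = 1.0 / max(self.rtt[replica], 1e-9)
```

A client would call `report` after every response; a consistently slow replica then receives a shrinking fraction of that client's load rather than being dropped outright, which is the gist of the gradual distribution the abstract contrasts with eager best-replica selection.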
|
Fedel, Gabriel de Souza
Busca multimodal para apoio à pesquisa em biodiversidade (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
2011.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{deFedel2011,
abstract = {Research on computing applied to biodiversity presents many challenges, such as the existence of large amounts of data and their heterogeneity and variety. The search tools available for such data are still limited and usually consider only textual data, failing to explore the potential of searching over data of other natures, such as images or sounds. The goal of this project is to analyze the problems of performing multimodal queries combining text and image in the biodiversity domain, proposing a set of tools to process such queries. With this integrated search, the retrieval of biodiversity data is expected to become more comprehensive, assisting biodiversity researchers in their tasks and encouraging lay users to access these data. This work is part of the BioCORE project, a partnership between computer science and biology researchers to improve biodiversity research.},
author = {Gabriel de Souza Fedel},
date = {2011-04-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/dissertacao2.pdf},
school = {Instituto de Computação - Unicamp},
title = {Busca multimodal para apoio à pesquisa em biodiversidade},
year = {2011}
}
|
Nakai, Alan Massaru;
Madeira, Edmundo;
Buzato, Luiz E.
Load Balancing for Internet Distributed Services using Limited Redirection Rates (conference)
Proceedings of the 5th Latin-American Symposium on Dependable Computing,
2011.
(
Abstract |
BibTeX |
Tags:
Conference
)
@conference{Nakai2011b,
abstract = {The Internet has become the universal support for computer applications. This increases the need for solutions that provide dependability and QoS for web applications. The replication of web servers on geographically distributed datacenters allows the service provider to tolerate disastrous failures and to improve the response times perceived by clients. A key issue for good performance of worldwide distributed web services is the efficiency of the load balancing mechanism used to distribute client requests among the replicated servers. Load balancing can reduce the need for over-provision of resources, and help tolerate abrupt load peaks and/or partial failures through load conditioning. In this paper, we propose a new load balancing solution that reduces service response times by redirecting requests to the closest remote servers without overloading them. We also describe a middleware that implements this protocol and present the results of a set of simulations that show its usefulness.},
author = {Alan Massaru Nakai and Edmundo Madeira and Luiz E. Buzato},
booktitle = {Proceedings of the 5th Latin-American Symposium on Dependable Computing},
date = {2011-04-01},
keyword = {Conference},
title = {Load Balancing for Internet Distributed Services using Limited Redirection Rates},
year = {2011}
}
|
Gomes, Alessandra;
Santanchè, André
Autoria Virtual Baseada no Mundo Real (conference)
Anais do X Workshop de Ferramentas e Aplicações - WebMedia,
2011.
(
Links |
BibTeX |
Tags:
Conference
)
@conference{eSantanche2011,
author = {Alessandra Gomes and André Santanchè},
booktitle = {Anais do X Workshop de Ferramentas e Aplicações - WebMedia},
date = {2011-01-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/autoria-virtual-baseada-em-dados-do-mundo-real.pdf},
title = {Autoria Virtual Baseada no Mundo Real},
year = {2011}
}
|
Costa, Taluna Mendes d'Araújo;
Santanchè, André
Padrão de Anotação Semântica de Código e sua Aplicação no Desenvolvimento de Componentes (conference)
Anais do V Simpósio Brasileiro de Componentes, Arquiteturas e Reutilização de Software,
2011.
(
Links |
BibTeX |
Tags:
Conference
)
@conference{dAraujoeSantanche2011,
author = {Taluna Mendes d'Araújo Costa and André Santanchè},
booktitle = {Anais do V Simpósio Brasileiro de Componentes, Arquiteturas e Reutilização de Software},
date = {2011-01-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/anotacao-semantica-codigo1.pdf},
title = {Padrão de Anotação Semântica de Código e sua Aplicação no Desenvolvimento de Componentes},
year = {2011}
}
|
Medeiros, Claudia Bauzer;
Santanche, Andre;
Madeira, Edmundo;
Martins, Eliane;
Magalhaes, Geovane;
Baranauskas, Maria Cecilia;
Leite, Neucimar;
Torres, Ricardo da Silva
Data Driven Research at LIS: the Laboratory of Information Systems at UNICAMP (article)
JIDM,
2,
2011.
(
Abstract |
Links |
BibTeX |
Tags:
Article
)
@article{Medeiros2011,
abstract = {This article presents an overview of the research conducted at the Laboratory of Information Systems (LIS) at the Institute of Computing, UNICAMP. Its creation, in 1994, was motivated by the need to support data-driven research within multidisciplinary projects involving computer scientists and scientists from other fields. Throughout the years, it has housed projects in many domains - in agriculture, biodiversity, medicine, health, bioinformatics, urban planning, telecommunications, and sports - with scientific results in these fields and in Computer Science, with emphasis in data management, integrating research on databases, image processing, human-computer interfaces, software engineering and computer networks. The research produced 14 PhD theses, 70 MSc dissertations, 40$+$ journal papers and 200$+$ conference papers, having been assisted by over 80 undergraduate student scholarships. Several of these results were obtained through cooperation with many Brazilian universities and research centers, as well as groups in Canada, USA, France, Germany, the Netherlands and Portugal. The authors of this article are faculty at the Institute whose students developed their MSc or PhD research in the lab. For additional details, online systems, papers and reports, see http://www.lis.ic.unicamp.br and http://www.lis.ic.unicamp.br/publications},
author = {Claudia Bauzer Medeiros and Andre Santanche and Edmundo Madeira and Eliane Martins and Geovane Magalhaes and Maria Cecilia Baranauskas and Neucimar Leite and Ricardo da Silva Torres},
date = {2011-01-01},
journal = {JIDM},
keyword = {Article},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/Versao-FINAL.pdf},
number = {2},
pages = {93-108},
title = {Data Driven Research at LIS: the Laboratory of Information Systems at UNICAMP},
volume = {2},
year = {2011}
}
|
Mariote, Leonardo;
Medeiros, Claudia Bauzer;
Torres, Ricardo da Silva;
Bueno, Lucas M.
TIDES—a new descriptor for time series oscillation behavior (article)
Geoinformatica,
2011.
(
Abstract |
BibTeX |
Tags:
Article
)
@article{Mariote2011,
abstract = {Sensor networks have increased the amount and variety of temporal data available, requiring the definition of new techniques for data mining. Related research typically addresses the problems of indexing, clustering, classification, summarization, and anomaly detection. There is a wide range of techniques to describe and compare time series, but they focus on series’ values. This paper concentrates on a new aspect—that of describing oscillation patterns. It presents a technique for time series similarity search, and multiple temporal scales, defining a descriptor that uses the angular coefficients from a linear segmentation of the curve that represents the evolution of the analyzed series. This technique is generalized to handle co-evolution, in which several phenomena vary at the same time. Preliminary experiments with real datasets showed that our approach correctly characterizes the oscillation of single time series, for multiple time scales, and is able to compute the similarity among sets of co-evolving series.},
author = {Leonardo Mariote and Claudia Bauzer Medeiros and Ricardo da Silva Torres and Lucas M. Bueno},
date = {2011-01-01},
journal = {Geoinformatica},
keyword = {Article},
pages = {75-109},
title = {TIDES—a new descriptor for time series oscillation behavior},
volume = {15},
year = {2011}
}
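The core idea of the descriptor — characterizing a series by the angular coefficients (slopes) of a piecewise-linear segmentation rather than by its raw values — can be sketched as follows. Fixed-size segments and least-squares slopes are simplifying assumptions here; the paper's segmentation may differ:

```python
import numpy as np

def slope_descriptor(series, n_segments):
    """Describe a time series by the slope (angular coefficient) of a
    least-squares line fitted to each of n_segments equal chunks."""
    chunks = np.array_split(np.asarray(series, dtype=float), n_segments)
    slopes = []
    for chunk in chunks:
        x = np.arange(len(chunk))
        slope = np.polyfit(x, chunk, 1)[0]  # degree-1 fit returns [slope, intercept]
        slopes.append(slope)
    return np.array(slopes)

def similarity(a, b):
    """Compare two series' oscillation patterns via their slope vectors."""
    return -float(np.linalg.norm(a - b))  # higher = more similar

rising = slope_descriptor([0, 1, 2, 3, 4, 5, 6, 7], 2)  # both halves rise
```

Because the descriptor captures how the series oscillates rather than where its values lie, two series with the same shape at different offsets compare as similar — the property the abstract emphasizes for multi-scale and co-evolving series.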
|
2010 |
Carromeu, Camilo;
Medeiros, Claudia Bauzer
Spatial Monitoring of Cattle – Impact on the Carbon Cycle (conference)
Proc. GeoChange 2010 - Research Symposium GIScience for Environmental Change,
2010.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Carromeu2010,
abstract = {There is a growing demand for accurate information about the real environmental impact caused by cattle, accompanied by a concern for increased production of cattle related products in a sustainable manner. With the widespread adoption of RFID chips for bovine traceability and new technologies for measuring carbon dioxide in the atmosphere, it is now feasible to develop carbon cycle models that combine such factors. This presents challenges that range from data management to model specification and validation, to correlate animal movements and their impact on different biomes. This paper presents a proposal towards this goal, concerned with the creation of a framework to store and index semantic space trajectories of livestock to enable monitoring of the production of CO2.},
author = {Camilo Carromeu and Claudia Bauzer Medeiros},
booktitle = {Proc. GeoChange 2010 - Research Symposium GIScience for Environmental Change},
date = {2010-11-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/camilo-geochange_2010.pdf},
title = {Spatial Monitoring of Cattle – Impact on the Carbon Cycle},
year = {2010}
}
|
Fedel, Gabriel de Souza;
Medeiros, Claudia Bauzer
Busca multimodal para apoio à pesquisa em biodiversidade (conference)
WTDBD - Workshop de Teses e Dissertações em Bancos de Dados,
2010.
(
Links |
BibTeX |
Tags:
Conference
)
@conference{deFedel2010,
author = {Gabriel de Souza Fedel and Claudia Bauzer Medeiros},
booktitle = {WTDBD - Workshop de Teses e Dissertações em Bancos de Dados},
date = {2010-10-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/ArtigoWTDBD2010.pdf},
title = {Busca multimodal para apoio à pesquisa em biodiversidade},
year = {2010}
}
|
Malaverri, Joana E. Gonzales;
Medeiros, Claudia Bauzer
Handling Provenance in Biodiversity (conference)
Workshop on Challenges in eScience (CIS),
2010.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Malaverri2010,
abstract = {One of the concerns in eScience research is the design and development of novel solutions to support distributed collaboration. In this context, regardless of the scientific domain, an important problem is the reproducibility of the results from scientific activities, considering the heterogeneous data involved and the specific research context. This paper presents a proposal to help solve this problem, proposing a software architecture to handle provenance issues.},
author = {Joana E. Gonzales Malaverri and Claudia Bauzer Medeiros},
booktitle = {Workshop on Challenges in eScience (CIS)},
date = {2010-10-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/joanaCIS_CamRea.pdf},
title = {Handling Provenance in Biodiversity},
year = {2010}
}
|
Santanchè, André;
Baumann, Peter
Component-based Web Clients For Scientific Data Exploration Using The DCC Framework (conference)
GIScience 2010,
Zurich, Switzerland,
2010.
(
BibTeX |
Tags:
Conference
)
@conference{Santanche2010b,
address = {Zurich, Switzerland},
author = {André Santanchè and Peter Baumann},
booktitle = {GIScience 2010},
date = {2010-09-01},
keyword = {Conference},
title = {Component-based Web Clients For Scientific Data Exploration Using The DCC Framework},
year = {2010}
}
|
Koga, Ivo;
Medeiros, Claudia Bauzer;
Branquinho, Omar
Handling and Publishing Wireless Sensor Network Data: a hands-on experiment (conference)
Proceedings IV eScience Workshop - XXX Brazilian Computer Society Conference,
SBC,
2010.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Koga2010,
abstract = {eScience research, in computer science, concerns the development of tools, models and techniques to help scientists from other domains to develop their own research. One problem which is common to all is concerned with management of heterogeneous data offering multiple interaction possibilities. This paper presents a proposal to help solve this problem, tailored to wireless sensor data – an important data source in eScience. This proposal is illustrated with a case study.},
author = {Ivo Koga and Claudia Bauzer Medeiros and Omar Branquinho},
booktitle = {Proceedings IV eScience Workshop - XXX Brazilian Computer Society Conference},
date = {2010-07-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/CSBC-eScience2010.pdf},
publisher = {SBC},
title = {Handling and Publishing Wireless Sensor Network Data: a hands-on experiment},
year = {2010}
}
|
Santos, Jefersson Alex dos;
Penatti, Otávio A. B.;
Torres, Ricardo da Silva
Evaluating the Potential of Texture and Color Descriptors for Remote Sensing Image Retrieval and Classification (conference)
Proceedings of International Conference on Computer Vision Theory and Applications (VISAPP 2010),
Angers, France,
2010.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{dosSantos2010b,
abstract = {Classifying Remote Sensing Images (RSI) is a hard task. There are automatic approaches whose results normally need to be revised. The identification and polygon extraction tasks usually rely on applying classification strategies that exploit visual aspects related to spectral and texture patterns identified in RSI regions. There are a lot of image descriptors proposed in the literature for content-based image retrieval purposes that can be useful for RSI classification. This paper presents a comparative study to evaluate the potential of using successful color and texture image descriptors for remote sensing retrieval and classification. Seven descriptors that encode texture information and twelve color descriptors that can be used to encode spectral information were selected. We perform experiments to evaluate the effectiveness of these descriptors, considering image retrieval and classification tasks. To evaluate descriptors in classification tasks, we also propose a methodology based on KNN classifier. Experiments demonstrate that Joint Auto-Correlogram (JAC), Color Bitmap, Invariant Steerable Pyramid Decomposition (SID) and Quantized Compound Change Histogram (QCCH) yield the best results.},
address = {Angers, France},
author = {Jefersson Alex dos Santos and Otávio A. B. Penatti and Ricardo da Silva Torres},
booktitle = {Proceedings of International Conference on Computer Vision Theory and Applications (VISAPP 2010)},
date = {2010-05-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/visapp2010.pdf},
title = {Evaluating the Potential of Texture and Color Descriptors for Remote Sensing Image Retrieval and Classification},
year = {2010}
}
Classifying Remote Sensing Images (RSI) is a hard task. There are automatic approaches whose results normally need to be revised. The identification and polygon extraction tasks usually rely on applying classification strategies that exploit visual aspects related to spectral and texture patterns identified in RSI regions. Many image descriptors proposed in the literature for content-based image retrieval can be useful for RSI classification. This paper presents a comparative study to evaluate the potential of using successful color and texture image descriptors for remote sensing retrieval and classification. Seven descriptors that encode texture information and twelve color descriptors that can be used to encode spectral information were selected. We perform experiments to evaluate the effectiveness of these descriptors, considering image retrieval and classification tasks. To evaluate descriptors in classification tasks, we also propose a methodology based on the KNN classifier. Experiments demonstrate that Joint Auto-Correlogram (JAC), Color Bitmap, Invariant Steerable Pyramid Decomposition (SID) and Quantized Compound Change Histogram (QCCH) yield the best results.
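The abstract above does not spell out the KNN-based evaluation methodology; as a rough sketch of the general idea only (toy data and names are hypothetical, not the authors' protocol), a descriptor can be judged by how well a k-nearest-neighbor classifier performs on the feature vectors it produces:

```python
from collections import Counter
from math import dist

def knn_classify(train, labels, query, k=1):
    """Label a query by majority vote among its k nearest training vectors."""
    ranked = sorted(range(len(train)), key=lambda i: dist(train[i], query))
    votes = Counter(labels[i] for i in ranked[:k])
    return votes.most_common(1)[0][0]

# Toy descriptor vectors for two region classes (hypothetical values).
train = [(0.9, 0.1), (0.8, 0.2), (0.1, 0.9), (0.2, 0.8)]
labels = ["crop", "crop", "forest", "forest"]
print(knn_classify(train, labels, (0.85, 0.15)))  # crop
```

A descriptor that places same-class regions close together in feature space will score well under such a classifier; comparing scores across descriptors then ranks the descriptors themselves.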
|
Sousa, Sidney Roberto de
Management of Semantic Annotations of Data on the Web for Agricultural Applications (partially in English) (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
2010.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{deSousa2010,
abstract = {Geographic information systems (GIS) are increasingly using geospatial data from the Web to produce geographic information. One big challenge is to find the relevant data, which often is based on keywords or even file names. However, these approaches lack semantics. Thus, it is necessary to provide mechanisms to prepare data to help retrieval of semantically relevant data. To attack this problem, this dissertation proposes a service-based architecture to manage semantic annotations. In this work, a semantic annotation is a set of triples - called semantic annotation units - <subject, metadata field, object>, where subject is a geospatial resource, metadata field contains some characteristic of this resource, and object is an ontology term that semantically associates the metadata field with some appropriate concept. The main contributions of this dissertation are: a comparative study on annotation tools; specification and implementation of a service-based architecture to manage semantic annotations, including services for handling ontology terms; and a comparative analysis of mechanisms for storing semantic annotations. The work takes as case study semantic annotations about agricultural resources.},
author = {Sidney Roberto de Sousa},
date = {2010-03-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/DissertacaoSidney.pdf},
school = {Instituto de Computação - Unicamp},
title = {Management of Semantic Annotations of Data on the Web for Agricultural Applications (partially in English)},
year = {2010}
}
Geographic information systems (GIS) are increasingly using geospatial data from the Web to produce geographic information. One big challenge is to find the relevant data, which often is based on keywords or even file names. However, these approaches lack semantics. Thus, it is necessary to provide mechanisms to prepare data to help retrieval of semantically relevant data. To attack this problem, this dissertation proposes a service-based architecture to manage semantic annotations. In this work, a semantic annotation is a set of triples - called semantic annotation units - <subject, metadata field, object>, where subject is a geospatial resource, metadata field contains some characteristic of this resource, and object is an ontology term that semantically associates the metadata field with some appropriate concept. The main contributions of this dissertation are: a comparative study on annotation tools; specification and implementation of a service-based architecture to manage semantic annotations, including services for handling ontology terms; and a comparative analysis of mechanisms for storing semantic annotations. The work takes as case study semantic annotations about agricultural resources.
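The annotation-unit triples described above map naturally onto a small data structure. The sketch below is illustrative only (the field values and the ontology term are invented assumptions, not the dissertation's actual schema):

```python
from typing import NamedTuple

class AnnotationUnit(NamedTuple):
    """One semantic annotation unit: <subject, metadata field, object>."""
    subject: str         # geospatial resource, e.g. a dataset identifier
    metadata_field: str  # characteristic of the resource being described
    obj: str             # ontology term giving the field its semantics

# A hypothetical unit linking a soil map's theme to an ontology concept.
unit = AnnotationUnit(
    subject="soil_map_campinas.shp",
    metadata_field="theme",
    obj="agri_onto:SoilType",
)
print(unit.obj)  # agri_onto:SoilType
```

A full annotation is then simply a set of such units attached to one resource, which is what the proposed services store and query.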
|
Penatti, Otávio Augusto Bizetto;
Torres, Ricardo da Silva
Eva - An Evaluation Tool for Comparing Descriptors in Content-Based Image Retrieval Tasks (conference)
11th ACM SIGMM International Conference on Multimedia Information Retrieval,
2010.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Penatti2010,
abstract = {This paper presents Eva, a tool for evaluating image descriptors for content-based image retrieval. Eva integrates the most common stages of an image retrieval process and provides functionalities to facilitate the comparison of image descriptors in the context of content-based image retrieval. Eva supports the management of image descriptors and image collections and creates a standardized environment to run comparative experiments using them.},
author = {Otávio Augusto Bizetto Penatti and Ricardo da Silva Torres},
booktitle = {11th ACM SIGMM International Conference on Multimedia Information Retrieval},
date = {2010-03-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/mir106o-penatti.pdf},
title = {Eva - An Evaluation Tool for Comparing Descriptors in Content-Based Image Retrieval Tasks},
year = {2010}
}
This paper presents Eva, a tool for evaluating image descriptors for content-based image retrieval. Eva integrates the most common stages of an image retrieval process and provides functionalities to facilitate the comparison of image descriptors in the context of content-based image retrieval. Eva supports the management of image descriptors and image collections and creates a standardized environment to run comparative experiments using them.
|
Faria, F. A.;
Veloso, A.;
Valle, E.;
Torres, R.;
Gonçalves;
Meira, W.
Learning to Rank for Content-Based Image Retrieval (conference)
ACM International Conference on Multimedia Information Retrieval (MIR 2010),
2010.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Faria2010b,
abstract = {In Content-based Image Retrieval (CBIR), accurately ranking the returned images is of paramount importance, since users consider mostly the topmost results. The typical ranking strategy used by many CBIR systems is to employ image content descriptors, so that returned images that are most similar to the query image are placed higher in the rank. While this strategy is well accepted and widely used, improved results may be obtained by combining multiple image descriptors. In this paper we explore this idea, and introduce algorithms that learn to combine information coming from different descriptors. The proposed learning to rank algorithms are based on three diverse learning techniques: Support Vector Machines (CBIR-SVM), Genetic Programming (CBIR-GP), and Association Rules (CBIR-AR). Eighteen image content descriptors (color, texture, and shape information) are used as input and provided as training to the learning algorithms. We performed a systematic evaluation involving two complex and heterogeneous image databases (Corel and Caltech) and two evaluation measures (Precision and MAP). The empirical results show that all learning algorithms provide significant gains when compared to the typical ranking strategy in which descriptors are used in isolation. We concluded that, in general, CBIR-AR and CBIR-GP outperform CBIR-SVM. A fine-grained analysis revealed the lack of correlation between the results provided by CBIR-AR and the results provided by the other two algorithms, which indicates the opportunity for an advantageous hybrid approach.},
author = {Faria, F. A. and Veloso, A. and Valle, E. and Torres, R. and Gonçalves and Meira, W.},
booktitle = {ACM International Conference on Multimedia Information Retrieval (MIR 2010)},
date = {2010-03-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/mir067-faria.pdf},
title = {Learning to Rank for Content-Based Image Retrieval},
year = {2010}
}
In Content-based Image Retrieval (CBIR), accurately ranking the returned images is of paramount importance, since users consider mostly the topmost results. The typical ranking strategy used by many CBIR systems is to employ image content descriptors, so that returned images that are most similar to the query image are placed higher in the rank. While this strategy is well accepted and widely used, improved results may be obtained by combining multiple image descriptors. In this paper we explore this idea, and introduce algorithms that learn to combine information coming from different descriptors. The proposed learning to rank algorithms are based on three diverse learning techniques: Support Vector Machines (CBIR-SVM), Genetic Programming (CBIR-GP), and Association Rules (CBIR-AR). Eighteen image content descriptors (color, texture, and shape information) are used as input and provided as training to the learning algorithms. We performed a systematic evaluation involving two complex and heterogeneous image databases (Corel and Caltech) and two evaluation measures (Precision and MAP). The empirical results show that all learning algorithms provide significant gains when compared to the typical ranking strategy in which descriptors are used in isolation. We concluded that, in general, CBIR-AR and CBIR-GP outperform CBIR-SVM. A fine-grained analysis revealed the lack of correlation between the results provided by CBIR-AR and the results provided by the other two algorithms, which indicates the opportunity for an advantageous hybrid approach.
|
Faria, Fábio Augusto
Uso de Técnicas de Aprendizagem para Classificação e Recuperação de Imagens (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
2010.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{Faria2010,
abstract = {Learning techniques have been employed in several application areas (medicine, biology, security, among others). This work evaluates the use of Genetic Programming (GP) in image retrieval and classification tasks. GP searches for optimal solutions, inspired by the theory of natural selection of species: fitter individuals (better solutions) tend to evolve and reproduce in future generations.},
author = {Fábio Augusto Faria},
date = {2010-03-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/fabio_faria.pdf},
school = {Instituto de Computação - Unicamp},
title = {Uso de Técnicas de Aprendizagem para Classificação e Recuperação de Imagens},
year = {2010}
}
Learning techniques have been employed in several application areas (medicine, biology, security, among others). This work evaluates the use of Genetic Programming (GP) in image retrieval and classification tasks. GP searches for optimal solutions, inspired by the theory of natural selection of species: fitter individuals (better solutions) tend to evolve and reproduce in future generations.
|
Medeiros, C. B.;
Joliveau, M.;
Jomier, G.;
Vuyst, F. de
Managing sensor traffic data and forecasting unusual behaviour propagation (article)
Geoinformatica,
3,
2010.
(
Abstract |
Links |
BibTeX |
Tags:
sensor data, time series, traffic management
)
@article{mejojovu10,
abstract = {Sensor data on traffic events have prompted a wide range of research issues, related to the so-called ITS (Intelligent Transportation Systems). Data are delivered by both static (fixed) and mobile (embedded) sensors, generating large and complex spatio-temporal series. This scenario presents several research challenges in spatio-temporal data management and data analysis. Management issues involve, for instance, data cleaning and data fusion to support queries at distinct spatial and temporal granularities. Analysis issues include the characterization of traffic behavior for given space and/or time windows, and the detection of anomalous behavior (either due to sensor malfunction or to traffic events). This paper contributes to the solution of some of these issues through a new kind of framework to manage static sensor data. Our work is based on combining research on analytical methods to process sensor data and data management strategies to query these data. The first component is geared towards supporting pattern matching. This leads to a model to study and predict unusual traffic behavior along an urban road network. The second component deals with spatio-temporal database issues, taking into account information produced by the model. This allows distinct granularities and modalities of analysis of sensor data in space and time. This work was conducted within a project that uses real data, with tests conducted on 1000 sensors, over 3 years, in a large French city.},
author = {C. B. Medeiros and M. Joliveau and G. Jomier and F. de Vuyst},
date = {2010-02-28},
journal = {Geoinformatica},
keyword = {sensor data, time series, traffic management},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2016/09/MedeirosJoliveauJomierDeVuyst.pdf},
number = {3},
pages = {279-305},
title = {Managing sensor traffic data and forecasting unusual behaviour propagation},
volume = {14},
year = {2010}
}
Sensor data on traffic events have prompted a wide range of research issues, related to the so-called ITS (Intelligent Transportation Systems). Data are delivered by both static (fixed) and mobile (embedded) sensors, generating large and complex spatio-temporal series. This scenario presents several research challenges in spatio-temporal data management and data analysis. Management issues involve, for instance, data cleaning and data fusion to support queries at distinct spatial and temporal granularities. Analysis issues include the characterization of traffic behavior for given space and/or time windows, and the detection of anomalous behavior (either due to sensor malfunction or to traffic events).
This paper contributes to the solution of some of these issues through a new kind of framework to manage static sensor data. Our work is based on combining research on analytical methods to process sensor data and data management strategies to query these data. The first component is geared towards supporting pattern matching. This leads to a model to study and predict unusual traffic behavior along an urban road network. The second component deals with spatio-temporal database issues, taking into account information produced by the model. This allows distinct granularities and modalities of analysis of sensor data in space and time. This work was conducted within a project that uses real data, with tests conducted on 1000 sensors, over 3 years, in a large French city.
|
Macário, Carla Geovana N.;
Santos, Jefersson A. dos;
Medeiros, Claudia Bauzer;
Torres, Ricardo da S.
Annotating data to support decision-making: a case study (conference)
6th Workshop on Geographic Information Retrieval (GIR'10),
Zurich, Switzerland,
2010.
(
Links |
BibTeX |
Tags:
Conference
)
@conference{Macario2010,
address = {Zurich, Switzerland},
author = {Carla Geovana N. Macário and Jefersson A. dos Santos and Claudia Bauzer Medeiros and Ricardo da S. Torres},
booktitle = {6th Workshop on Geographic Information Retrieval (GIR'10)},
date = {2010-02-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/acmGIR.pdf},
title = {Annotating data to support decision-making: a case study},
year = {2010}
}
|
Santos, Jefersson A. dos;
Ferreira, Cristiano D.;
Torres, Ricardo da S.;
Gonçalvez, Marcos A.;
Lamparelli, Rubens A. C.
A Relevance Feedback Method based on Genetic Programming for Classification of Remote Sensing Images (article)
Information Sciences,
2010.
(
Abstract |
BibTeX |
Tags:
Article
)
@article{dosSantos2010,
abstract = {This paper presents an interactive technique for remote sensing image classification. In our proposal, users are able to interact with the classification system, indicating regions of interest (and those which are not). This feedback information is employed by a genetic programming approach to learn user preferences and combine image region descriptors that encode spectral and texture properties. Experiments demonstrate that the proposed method is effective for image classification tasks and outperforms some recent and effective as well as traditional baselines for the problem.},
author = {Jefersson A. dos Santos and Cristiano D. Ferreira and Ricardo da S. Torres and Marcos A. Gonçalvez and Rubens A. C. Lamparelli},
date = {2010-01-01},
journal = {Information Sciences},
keyword = {Article},
note = {Accepted for publication},
title = {A Relevance Feedback Method based on Genetic Programming for Classification of Remote Sensing Images},
year = {2010}
}
This paper presents an interactive technique for remote sensing image classification. In our proposal, users are able to interact with the classification system, indicating regions of interest (and those which are not). This feedback information is employed by a genetic programming approach to learn user preferences and combine image region descriptors that encode spectral and texture properties. Experiments demonstrate that the proposed method is effective for image classification tasks and outperforms some recent and effective as well as traditional baselines for the problem.
|
Santanchè, André;
Silva, Luiz Augusto Matos da
Document-centered Learning Object Authoring (article)
Learning Technology Newsletter,
1,
2010.
(
Links |
BibTeX |
Tags:
Article
)
@article{Santanche2010,
author = {André Santanchè and Luiz Augusto Matos da Silva},
date = {2010-01-01},
journal = {Learning Technology Newsletter},
keyword = {Article},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/ieeetclt2009-v10.pdf},
number = {1},
pages = {58-61},
title = {Document-centered Learning Object Authoring},
volume = {12},
year = {2010}
}
|
Nakai, Alan;
Madeira, Edmundo;
Buzato, Luiz Eduardo
DNS-based Load Balancing for Web Services (conference)
Webist 2010,
2010.
(
Abstract |
BibTeX |
Tags:
Conference
)
@conference{Nakai2010,
abstract = {A key issue for good performance of geographically replicated web services is the efficiency of the load balancing mechanism used to distribute client requests among the replicas. This work revisits the research on DNS-based load balancing mechanisms considering a SOA (Service-Oriented Architecture) scenario. In this kind of load balancing solution, the Authoritative DNS (ADNS) of the distributed web service performs the role of the client request scheduler, redirecting the clients to one of the server replicas according to some load distribution policy. This paper proposes a new policy that combines client load information and server load information in order to reduce the negative effects of DNS caching on the load balancing. We also present the results obtained through an experimental testbed built on the basis of the TPC-W benchmark.},
author = {Alan Nakai and Edmundo Madeira and Luiz Eduardo Buzato},
booktitle = {Webist 2010},
date = {2010-01-01},
keyword = {Conference},
title = {DNS-based Load Balancing for Web Services},
year = {2010}
}
A key issue for good performance of geographically replicated web services is the efficiency of the load balancing mechanism used to distribute client requests among the replicas. This work revisits the research on DNS-based load balancing mechanisms considering a SOA (Service-Oriented Architecture) scenario. In this kind of load balancing solution, the Authoritative DNS (ADNS) of the distributed web service performs the role of the client request scheduler, redirecting the clients to one of the server replicas according to some load distribution policy. This paper proposes a new policy that combines client load information and server load information in order to reduce the negative effects of DNS caching on the load balancing. We also present the results obtained through an experimental testbed built on the basis of the TPC-W benchmark.
|
Senra, Rodrigo Dias Arruda;
Medeiros, Claudia Bauzer
Database Descriptors: Laying the Path to Commodity Web Data Services (conference)
17th IEEE International Conference and Workshops on the Engineering of Computer-Based Systems,
2010.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Medeiros2010,
abstract = {The growth of the Internet has dramatically changed the way information is accessed and managed. The Web contains an ever growing amount of distributed, semi-structured and uncontrolled data. In this new context, we should rethink how applications couple with DBMSs. Corporate intranets allowed a tiered coupling between applications and databases. However, that model is still too constrained, and unable to accommodate the hostility, insecurity and fast pace of the Web environment. Web applications soon, if not already, will seek to dynamically negotiate their relationship with distributed database services. Prior to accomplishing autonomous application-to-DBMS binding and seamless data migration, we need to devise a "lingua franca" to request and describe DBMS and database services and capabilities. Database descriptors (DBDs) are a step towards this vision. This paper presents the motivation for DBDs, their structure and architecture, examples and a use case scenario.},
author = {Rodrigo Dias Arruda Senra and Claudia Bauzer Medeiros},
booktitle = {17th IEEE International Conference and Workshops on the Engineering of Computer-Based Systems},
date = {2010-01-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/05457746.pdf},
pages = {386-392},
title = {Database Descriptors: Laying the Path to Commodity Web Data Services},
year = {2010}
}
The growth of the Internet has dramatically changed the way information is accessed and managed. The Web contains an ever growing amount of distributed, semi-structured and uncontrolled data. In this new context, we should rethink how applications couple with DBMSs. Corporate intranets allowed a tiered coupling between applications and databases. However, that model is still too constrained, and unable to accommodate the hostility, insecurity and fast pace of the Web environment. Web applications soon, if not already, will seek to dynamically negotiate their relationship with distributed database services. Prior to accomplishing autonomous application-to-DBMS binding and seamless data migration, we need to devise a "lingua franca" to request and describe DBMS and database services and capabilities. Database descriptors (DBDs) are a step towards this vision. This paper presents the motivation for DBDs, their structure and architecture, examples and a use case scenario.
|
Malaverri, Joana E. Gonzales;
Medeiros, Claudia Bauzer
Handling Provenance in Biodiversity (conference)
Workshop Challenges in eScience,
CIS,
2010.
(
Links |
BibTeX |
Tags:
Conference
)
@conference{Malaverri2010b,
address = {CIS},
author = {Joana E. Gonzales Malaverri and Claudia Bauzer Medeiros},
booktitle = {Workshop Challenges in eScience},
date = {2010-01-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/CIS-painel-1-Joana.ppt},
title = {Handling Provenance in Biodiversity},
year = {2010}
}
|
Pastorello Jr., Gilberto Zonta;
Daltio, Jaudete;
Medeiros, Claudia Bauzer
A Mechanism for Propagation of Semantic Annotations of Multimedia Content (article)
Journal of Multimedia,
2010.
(
Abstract |
Links |
BibTeX |
Tags:
Article
)
@article{Jr2010,
abstract = {Scientific research is producing and consuming large volumes of multimedia data at an ever growing rate. Data annotations are used, among others, to provide context information and enhance content management, making it easier to interpret and share data. However, raw multimedia data often needs to go through complex processing steps before it can be consumed. During these transformation processes, original annotations from the production phase are often discarded or ignored, since their usefulness is usually limited to the first transformation step. New annotations must be made at each step, and associated with the final product, a time consuming task often carried out manually. The task of systematically associating new annotations to the result of each data transformation step is known as annotation propagation. This paper introduces techniques for structuring and propagating annotations, in parallel to the data transformation processes, thereby alleviating the overhead and decreasing the errors introduced by manual annotation. This helps the construction of new annotated multimedia data sets, preserving contextual information. The solution is based on: (i) the notion of semantic annotations; (ii) a set of transformation rules, based on ontological relations; and (iii) workflows that deal with interrelated processing steps.},
author = {Gilberto Zonta Pastorello Jr and Jaudete Daltio and Claudia Bauzer Medeiros},
date = {2010-01-01},
journal = {Journal of Multimedia},
keyword = {Article},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/pastorellojr-daltio-medeiros_jmm2009_final.pdf},
note = {Accepted for publication},
title = {A Mechanism for Propagation of Semantic Annotations of Multimedia Content},
year = {2010}
}
Scientific research is producing and consuming large volumes of multimedia data at an ever growing rate. Data annotations are used, among others, to provide context information and enhance content management, making it easier to interpret and share data. However, raw multimedia data often needs to go through complex processing steps before it can be consumed. During these transformation processes, original annotations from the production phase are often discarded or ignored, since their usefulness is usually limited to the first transformation step. New annotations must be made at each step, and associated with the final product, a time consuming task often carried out manually. The task of systematically associating new annotations to the result of each data transformation step is known as annotation propagation. This paper introduces techniques for structuring and propagating annotations, in parallel to the data transformation processes, thereby alleviating the overhead and decreasing the errors introduced by manual annotation. This helps the construction of new annotated multimedia data sets, preserving contextual information. The solution is based on: (i) the notion of semantic annotations; (ii) a set of transformation rules, based on ontological relations; and (iii) workflows that deal with interrelated processing steps.
|
2009 |
Santos, Jefersson Alex dos;
Penatti, Otávio A. B.;
Torres, Ricardo da Silva
Evaluating the Potential of Texture and Color Descriptors for Remote Sensing Image Retrieval and Classification (Technical Report)
Institute of Computing, State University of Campinas,
Technical Report,
IC-09-47,
2009.
(
Abstract |
Links |
BibTeX |
Tags:
Techreport
)
@techreport{dosSantos2009b,
abstract = {Classifying Remote Sensing Images (RSI) is a hard task. There are automatic approaches whose results normally need to be revised. The identification and polygon extraction tasks usually rely on applying classification strategies that exploit visual aspects related to spectral and texture patterns identified in RSI regions. Many image descriptors proposed in the literature for content-based image retrieval can be useful for RSI classification. This paper presents a comparative study to evaluate the potential of using successful color and texture image descriptors for remote sensing retrieval and classification. Seven descriptors that encode texture information and twelve color descriptors that can be used to encode spectral information were selected. We highlight the main characteristics and perform experiments to evaluate the effectiveness of these descriptors. To evaluate descriptors in classification tasks, we also propose a methodology based on the KNN classifier. Experiments demonstrate that Joint Auto-Correlogram (JAC), Color Bitmap, Invariant Steerable Pyramid Decomposition (SID) and Quantized Compound Change Histogram (QCCH) yield the best results.},
author = {Jefersson Alex dos Santos and Otávio A. B. Penatti and Ricardo da Silva Torres},
date = {2009-12-01},
institution = {Institute of Computing, State University of Campinas},
keyword = {Techreport},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/09-47.pdf},
number = {IC-09-47},
title = {Evaluating the Potential of Texture and Color Descriptors for Remote Sensing Image Retrieval and Classification},
type = {Technical Report},
year = {2009}
}
Classifying Remote Sensing Images (RSI) is a hard task. There are automatic approaches whose results normally need to be revised. The identification and polygon extraction tasks usually rely on applying classification strategies that exploit visual aspects related to spectral and texture patterns identified in RSI regions. Many image descriptors proposed in the literature for content-based image retrieval can be useful for RSI classification. This paper presents a comparative study to evaluate the potential of using successful color and texture image descriptors for remote sensing retrieval and classification. Seven descriptors that encode texture information and twelve color descriptors that can be used to encode spectral information were selected. We highlight the main characteristics and perform experiments to evaluate the effectiveness of these descriptors. To evaluate descriptors in classification tasks, we also propose a methodology based on the KNN classifier. Experiments demonstrate that Joint Auto-Correlogram (JAC), Color Bitmap, Invariant Steerable Pyramid Decomposition (SID) and Quantized Compound Change Histogram (QCCH) yield the best results.
|
Macário, Carla Geovana do Nascimento
Semantic Annotation of Geospatial Data (Anotação Semântica de Dados Geoespaciais) (phdthesis)
Instituto de Computação - Unicamp,
phdthesis,
2009.
(
Links |
BibTeX |
Tags:
PhDThesis
)
@phdthesis{doMacario2009,
author = {Carla Geovana do Nascimento Macário},
date = {2009-12-01},
keyword = {PhDThesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/cap2.pdf},
school = {Instituto de Computação - Unicamp},
title = {Semantic Annotation of Geospatial Data (Anotação Semântica de Dados Geoespaciais)},
year = {2009}
}
|
Nakai, Alan;
Madeira, Edmundo;
Buzato, Luiz Eduardo
Lab4WS: A Testbed for Web Services (conference)
2nd IEEE International Workshop on Internet and Distributed Computing Systems (IDCS'09),
2009.
(
BibTeX |
Tags:
Conference
)
@conference{Nakai2009,
author = {Alan Nakai and Edmundo Madeira and Luiz Eduardo Buzato},
booktitle = {2nd IEEE International Workshop on Internet and Distributed Computing Systems (IDCS'09)},
date = {2009-12-01},
keyword = {Conference},
title = {Lab4WS: A Testbed for Web Services},
year = {2009}
}
|
Gil, Fabiana Bellette
Serviço Web para Anotação de Dados Geográficos Vetoriais e sua Aplicação em Sistemas de Informação de Biodiversidade (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
2009.
(
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{Gil2009,
author = {Fabiana Bellette Gil},
date = {2009-12-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/Dissertacao_FabianaBelletteGil.pdf},
school = {Instituto de Computação - Unicamp},
title = {Serviço Web para Anotação de Dados Geográficos Vetoriais e sua Aplicação em Sistemas de Informação de Biodiversidade},
year = {2009}
}
|
Baccarin, Evandro
Automated Negotiation of Multi-party Contracts in Agricultural Supply Chains (phdthesis)
Instituto de Computação - Unicamp,
phdthesis,
2009.
(
Abstract |
Links |
BibTeX |
Tags:
PhDThesis
)
@phdthesis{Baccarin2009,
abstract = {An agricultural supply chain comprises several kinds of actors that establish a complex net of relationships. These relationships may range from ad hoc and short-lasting ones to highly structured and long-lasting ones. This kind of chain has a few particularities, such as strict regulations and cultural influences, and is of considerable economic and social importance. Contracts are the natural way of expressing relationships among members of a chain. Thus, contracts and the activity of negotiating them are of major importance within a supply chain. This thesis proposes a model for agricultural supply chains that seamlessly integrates their main features, including their structure and their dynamics. Specifically, the thesis proposes a multi-party contract format and a negotiation protocol that builds such contracts. Multi-party contracts are important in this context because several actors of a supply chain may build alliances comprising mutual rights and obligations. A set of bilateral contracts is not well suited for such a purpose. The thesis also presents an implementation of the negotiation protocol that builds on Web services and a workflow engine (YAWL). (Thesis written mostly in English).},
author = {Evandro Baccarin},
date = {2009-12-01},
keyword = {PhDThesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/tese-eb-ic09.pdf},
school = {Instituto de Computação - Unicamp},
title = {Automated Negotiation of Multi-party Contracts in Agricultural Supply Chains},
year = {2009}
}
|
Sousa, Sidney Roberto de
A Semantic Approach to Describe Geospatial Resources (conference)
3rd International Workshop on Semantic and Conceptual Issues in GIS (SeCoGIS 2009),
2009.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{deSousa2009,
abstract = {Geographic information systems (GIS) are increasingly using geospatial data from the Web to produce geographic information. One big challenge is to find the relevant data, a search which is often based on keywords or even file names. However, these approaches lack semantics. Thus, it is necessary to provide mechanisms to prepare data to help retrieval of semantically relevant data. This paper proposes an approach to attack this problem. This approach is based on semantic annotations that use geographic metadata and ontologies to describe heterogeneous geospatial data. Semantic annotations are RDF/XML files that rely on an FGDC metadata schema, filled with appropriate ontology terms, and stored in an XML database. The proposal is illustrated by a case study of semantic annotations of agricultural resources, using domain ontologies.},
author = {Sidney Roberto de Sousa},
booktitle = {3rd International Workshop on Semantic and Conceptual Issues in GIS (SeCoGIS 2009)},
date = {2009-11-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/finalVersionSeCoGISSidney2009.pdf},
pages = {327-336},
title = {A Semantic Approach to Describe Geospatial Resources},
volume = {LNCS 5833},
year = {2009}
}
|
Macário, Carla Geovana N.;
Sousa, Sidney Roberto de;
Medeiros, Claudia Bauzer
Annotating Geospatial Data based on its Semantics (conference)
17th ACM SIGSPATIAL Conference,
ACM,
2009.
(
Links |
BibTeX |
Tags:
Conference
)
@conference{Macario2009,
author = {Carla Geovana N. Macário and Sidney Roberto de Sousa and Claudia Bauzer Medeiros},
booktitle = {17th ACM SIGSPATIAL Conference},
date = {2009-11-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/acm2009.pdf},
pages = {81-90},
publisher = {ACM},
title = {Annotating Geospatial Data based on its Semantics},
year = {2009}
}
|
Bacarin, E.;
Madeira, E.R.M.;
Medeiros, C.M.B.;
Aalst, W.M.P. van der
SPICA's Multi-party Negotiation Protocol: Implementation using YAWL (Technical Report)
Institute of Computing, State University of Campinas,
Technical Report,
TR-IC-09-44,
2009.
(
Abstract |
Links |
BibTeX |
Tags:
Techreport
)
@techreport{Bacarin2009b,
abstract = {A supply chain comprises several different kinds of actors that interact either in an ad hoc fashion (e.g., an occasional deal) or in a previously well-planned way. In the latter case, how the interactions develop is described in contracts that are agreed on before the interactions start. This agreement may involve several partners, thus a multi-party contract is better suited than a set of bi-lateral contracts. If one is willing to negotiate such contracts automatically, an appropriate negotiation protocol should be at hand. However, protocols for bi-lateral contracts are not suitable for multi-party contracts; e.g., the way of achieving consensus when only two negotiators are haggling over some issue is quite different when several negotiators are involved. In the first case, a simple bargain would suffice, but in the latter a ballot process is needed. This paper presents a negotiation protocol for electronic multi-party contracts which seamlessly combines several negotiation styles. It also elaborates on the main negotiation patterns the protocol allows for: bargain (for peer-to-peer negotiation), auction (when there is competition among the negotiators) and ballot (when the negotiation aims at consensus). Finally, it describes an implementation of this protocol based on Web services, and built on the YAWL Workflow Management System.},
author = {E. Bacarin and E.R.M. Madeira and C.M.B. Medeiros and W.M.P. van der Aalst},
date = {2009-11-01},
institution = {Institute of Computing, State University of Campinas},
keyword = {Techreport},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/tr-ic-09-44.pdf},
number = {TR-IC-09-44},
title = {SPICA's Multi-party Negotiation Protocol: Implementation using YAWL},
type = {Technical Report},
year = {2009}
}
|
Sousa, Sidney Roberto de;
Medeiros, Claudia Bauzer
Management of Semantic Annotations of Data on Web for Agricultural Applications (conference)
VIII WTDBD - Workshop de Teses e Dissertações em Bancos de Dados,
2009.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{deSousa2009b,
abstract = {Geographic information systems (GIS) are increasingly using geospatial data from the Web to produce geographic information. One big challenge is to find the relevant data, a search which is often based on keywords or even file names. However, these approaches lack semantics. Thus, it is necessary to provide mechanisms to prepare data to help retrieval of semantically relevant data. This paper proposes an approach to attack this problem. This approach is based on semantic annotations that use geographic metadata and ontologies to describe heterogeneous geospatial data. Semantic annotations are RDF/XML files that rely on an FGDC metadata schema, filled with appropriate ontology terms, and stored in an XML database. The proposal is illustrated by a case study of semantic annotations of agricultural resources, using domain ontologies.},
author = {Sidney Roberto de Sousa and Claudia Bauzer Medeiros},
booktitle = {VIII WTDBD - Workshop de Teses e Dissertações em Bancos de Dados},
date = {2009-10-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/57578_1.pdf},
note = {Accepted for presentation and publication},
title = {Management of Semantic Annotations of Data on Web for Agricultural Applications},
year = {2009}
}
|
Spina, Thiago V.;
Montoya-Zegarra, Javier A.;
Andrijauskas, Fábio;
Faria, Fábio A.;
Zampieri, Carlos E. A.;
Pinto-Cáceres, Sheila M.;
Carvalho, Tiago J. de;
Falcão, Alexandre X.
A comparative study among pattern classifiers in interactive image segmentation (conference)
SIBGRAPI,
2009.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Spina2009,
abstract = {Editing natural images usually demands considerable user involvement, with segmentation being one of the main challenges. This paper describes a unified graph-based framework for fast, precise and accurate interactive image segmentation. The method divides segmentation into object recognition, enhancement and extraction. Recognition is done by the user when markers are selected inside and outside the object. Enhancement increases the dissimilarities between object and background, and extraction separates them. Enhancement is done by a fuzzy pixel classifier and has a great impact on the number of markers required for extraction. In view of minimizing user involvement, we focus this paper on a comparative study among popular classifiers for enhancement, conducting experiments with several natural images and seven users.},
author = {Thiago V. Spina and Javier A. Montoya-Zegarra and Fábio Andrijauskas and Fábio A. Faria and Carlos E. A. Zampieri and Sheila M. Pinto-Cáceres and Tiago J. de Carvalho and Alexandre X. Falcão},
booktitle = {SIBGRAPI},
date = {2009-10-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/58129.pdf},
title = {A comparative study among pattern classifiers in interactive image segmentation},
year = {2009}
}
|
Senra, Rodrigo Dias Arruda;
Medeiros, Claudia Bauzer
SciFrame: a conceptual framework to describe data sharing in e-Science (conference)
XXIV SBBD - III eScience workshop,
2009.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Senra2009,
abstract = {The first SBC Challenge aims to provide solutions to the problem of managing large volumes of multimedia data. Our goal is to contribute towards research in these directions by discussing the problems involved in sharing scientific digital information. First, we propose a conceptual framework (SciFrame) that helps to understand the main issues involved and to integrate related research efforts. Second, we use a real case study to point out problems which are particular to scientific data management. Finally, we describe our case study using SciFrame.},
author = {Rodrigo Dias Arruda Senra and Claudia Bauzer Medeiros},
booktitle = {XXIV SBBD - III eScience workshop},
date = {2009-10-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/sbbd2009.pdf},
title = {SciFrame: a conceptual framework to describe data sharing in e-Science},
year = {2009}
}
|
Vilar, Bruno Siqueira Campos Mendonça
Processamento de Consultas Baseado em Ontologias para Sistemas de Biodiversidade (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
2009.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{Vilar2009,
abstract = {Biodiversity information systems deal with a heterogeneous set of information provided by different research groups. This diversity may concern the species studied, the structure of the collected information, the study site, working methodologies, or the researchers' goals, among other factors. This heterogeneity of data, users and procedures hampers the reuse and sharing of information. This work contributes to reducing this obstacle by improving the process of querying information in biodiversity systems. To this end, it proposes a query expansion mechanism that pre-processes a user (scientist) query, aggregating additional information from ontologies to bring the result closer to the user's intention. This mechanism is based on Web services and was implemented and tested using real data and use cases.},
author = {Bruno Siqueira Campos Mendonça Vilar},
date = {2009-09-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/DissertacaoBrunoLIS.pdf},
school = {Instituto de Computação - Unicamp},
title = {Processamento de Consultas Baseado em Ontologias para Sistemas de Biodiversidade},
year = {2009}
}
|
Murthy, Uma;
Fox, Edward A.;
Chen, Yinlin;
Hallerman, Eric;
Torres, Ricardo da Silva;
Ramos, Evandro J.;
Falcão, Tiago R. C.
Superimposed image description and retrieval for fish species identification (conference)
ECDL '09: Proc. of the 13th European conference on Research and Advanced Technology for Digital Libraries, Corfu, Greece,
2009.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Murthy2009,
abstract = {Fish species identification is critical to the study of fish ecology and management of fisheries. Traditionally, dichotomous keys are used for fish identification. The keys consist of questions about the observed specimen. Answers to these questions lead to more questions till the reader identifies the specimen. However, such keys are incapable of adapting or changing to meet different fish identification approaches, and often do not focus upon distinguishing characteristics favored by many field ecologists and more user-friendly field guides. This makes learning to identify fish difficult for Ichthyology students. Students usually supplement the use of the key with other methods such as making personal notes, drawings, annotated fish images, and more recently, fish information websites, such as Fishbase. Although these approaches provide useful additional content, it is dispersed across heterogeneous sources and can be tedious to access. Also, most of the existing electronic tools have limited support to manage user-created content, especially that related to parts of images, such as markings on drawings and images and associated notes. We present SuperIDR, a superimposed image description and retrieval tool, developed to address some of these issues. It allows users to associate parts of images with text annotations. Later, they can retrieve images, parts of images, annotations, and image descriptions through text- and content-based image retrieval. We evaluated SuperIDR in an undergraduate Ichthyology class as an aid to fish species identification and found that the use of SuperIDR yielded a higher likelihood of success in species identification than using traditional methods, including the dichotomous key, fish websites, notes, etc.},
author = {Uma Murthy and Edward A. Fox and Yinlin Chen and Eric Hallerman and Ricardo da Silva Torres and Evandro J. Ramos and Tiago R. C. Falcão},
booktitle = {ECDL '09: Proc. of the 13th European conference on Research and Advanced Technology for Digital Libraries, Corfu, Greece},
date = {2009-09-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/umurthy2009_ecdl_superimposed_image.pdf},
title = {Superimposed image description and retrieval for fish species identification},
year = {2009}
}
|
Kozievitch, Nádia P.;
Torres, Ricardo da Silva;
Falcão, Thiago;
Ramos, Evandro;
Andrade, Felipe;
Allegretti, Silmara Marques;
Ueta, Marlene Tiduko;
Madi, Rubens Riscala;
Murthy, Uma;
Fox, Edward A.;
Chen, Yinlin;
Hallerman, Eric
Evaluation of a Tablet PC image annotation and retrieval tool in the parasitology domain. (Technical Report)
Institute of Computing, State University of Campinas,
Technical Report,
IC-09-23,
2009.
(
Abstract |
Links |
BibTeX |
Tags:
Techreport
)
@techreport{Kozievitch2009b,
abstract = {The project Deployment and Assessment of an Image Annotation and Retrieval Tool has the objective of specifying and implementing an application to support image annotation and search (based on textual and visual descriptions) in the biodiversity domain. This technical report presents the activities related to the use of the tablet PC tool in the parasitology domain at Unicamp. The objective of this tool is to help the comparison of morphological characteristics among different species. The report is divided into activities accomplished, application setup and specific features, followed by experimental results and conclusions. Preliminary results showed that students regarded the tool as very useful, serving as an alternative learning approach.},
author = {Nádia P. Kozievitch and Ricardo da Silva Torres and Thiago Falcão and Evandro Ramos and Felipe Andrade and Silmara Marques Allegretti and Marlene Tiduko Ueta and Rubens Riscala Madi and Uma Murthy and Edward A. Fox and Yinlin Chen and Eric Hallerman},
date = {2009-07-01},
institution = {Institute of Computing, State University of Campinas},
keyword = {Techreport},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/09-23.pdf},
number = {IC-09-23},
title = {Evaluation of a Tablet PC image annotation and retrieval tool in the parasitology domain.},
type = {Technical Report},
year = {2009}
}
|
Filho, Arnaldo Francisco Vitaliano
Mechanisms for Semantic Annotation of Scientific Workflows (Mecanismos de Anotação Semântica de Workflows Científicos) (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
2009.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{Filho2009,
abstract = {The sharing of information, processes and models of experiments is increasing among scientists from many organizations and areas of knowledge, and thus there is a need to provide mechanisms for workflow discovery. Many of these models are described as scientific workflows. However, there is no standard specification to describe them, which complicates the reuse of the workflows and components that are available. This thesis contributes to solving this problem by presenting the following results: an analysis of issues related to the sharing and cooperative design of scientific workflows on the Web; an analysis of semantic aspects and metadata related to workflows; and the development of a Web-based workflow editor that incorporates our semantic annotation model for scientific workflows. Together, these results create the basis for the discovery, reuse and sharing of scientific workflows on the Web.},
author = {Arnaldo Francisco Vitaliano Filho},
date = {2009-07-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/DissertacaoArnaldoVitaliano.pdf},
school = {Instituto de Computação - Unicamp},
title = {Mechanisms for Semantic Annotation of Scientific Workflows (Mecanismos de Anotação Semântica de Workflows Científicos)},
year = {2009}
}
|
Macário, C. G. N;
Medeiros, C. B.
A Framework for Semantic Annotation of Geospatial Data for Agriculture (article)
Int. J. Metadata, Semantics and Ontology - Special Issue on "Agricultural Metadata and Semantics",
1/2,
2009.
(
Abstract |
Links |
BibTeX |
Tags:
Article
)
@article{Macario2009b,
abstract = {The Web is a huge repository of geospatial information. Efficient retrieval of this information is a key factor in planning and decision-making in many domains, including agriculture. However, standards for data annotation and exchange enable only syntactic interoperability, while semantic heterogeneity presents challenges. This work describes a framework that tackles interoperability problems via semantic annotations, which are based on multiple ontologies. The framework is being developed within a project to support agricultural planning in Brazil. The paper discusses design and implementation issues using a real case study, provides an overview of annotation mechanisms and identifies requirements for annotating agricultural data.},
author = {C. G. N Macário and C. B. Medeiros},
date = {2009-06-01},
journal = {Int. J. Metadata, Semantics and Ontology - Special Issue on "Agricultural Metadata and Semantics"},
keyword = {Article},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/IJMSO-41212-Macario-and-Medeiros-2.pdf},
number = {1/2},
pages = {118-132},
title = {A Framework for Semantic Annotation of Geospatial Data for Agriculture},
volume = {4},
year = {2009}
}
|
Kozievitch, Nádia Puchalski
Complex Objects in Digital Libraries (conference)
JCDL '09 - Proc. of Joint Conference on Digital Libraries,
2009.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Kozievitch2009,
abstract = {There are several applications that need support for complex objects, such as new mechanisms for managing data; creating references, links and annotations; and clustering or organizing complex digital objects and their components. In this work we present a research proposal to address these issues. The objective is to specify and implement a formal and unified framework to manage multimodal complex objects in digital libraries, using the 5S formalism and Digital Content Component (DCC) aggregation.},
author = {Nádia Puchalski Kozievitch},
booktitle = {JCDL '09 - Proc. of Joint Conference on Digital Libraries},
date = {2009-06-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/consortium_nadia.pdf},
title = {Complex Objects in Digital Libraries},
year = {2009}
}
|
Li, Lin Tzy;
Torres, Ricardo da Silva
Revisitando os desafios da recuperação de informação geográfica na Web (Technical Report)
Institute of Computing, State University of Campinas,
Technical Report,
IC-09-18,
2009.
(
Abstract |
Links |
BibTeX |
Tags:
Techreport
)
@techreport{Li2009,
abstract = {Há uma grande quantidade de informação na Web sobre entidades geográficas e grande interesse em localizá-la em mapas. Entretanto, os mecanismos de busca na Web ainda não suportam em uma única ferramenta buscas que envolvam relações espaciais, pois em geral a consulta é processada levando-se em conta apenas as palavras-chaves usadas na consulta. Este artigo faz uma breve revisão da área de Recuperação de Informação Geográfica (GIR) e uma releitura de desafios e oportunidades de pesquisa da área a partir da proposta de uma arquitetura para buscas Web envolvendo relacionamento espacial entre entidades geográficas e uma implementação inicial dela.
The geographic information is part of people's daily life. There is a huge amount of information on the Web about or related to geographic entities and people are interested in localizing them on maps. Nevertheless, the conventional Web search engines, which are keywords-driven mechanisms, do not support queries involving spatial relationships between geographic entities. This paper revises the Geographic Information Retrieval (GIR) area and restates its research challenges and opportunities, based on a proposed architecture for executing Web queries involving spatial relationships and an initial implementation of that.},
author = {Lin Tzy Li and Ricardo da Silva Torres},
date = {2009-05-01},
institution = {Institute of Computing, State University of Campinas},
keyword = {Techreport},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/RelTec-IC-LTL1.pdf},
number = {IC-09-18},
title = {Revisitando os desafios da recuperação de informação geográfica na Web},
type = {Technical Report},
year = {2009}
}
|
Figueiredo, Mauricio Augusto
Managing the Quality of Products in a Supply Chain (Gerenciamento de Regras de Qualidade de Produtos em Cadeias Produtivas) (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
2009.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{Figueiredo2009,
abstract = {Cadeias produtivas têm se tornado cada vez mais dependentes de sistemas computacionais. Além dos desafios científicos, há várias conseqüências econômicas. Esta dissertação trata de mecanismos de gerenciamento de regras que especificam a qualidade de produtos em cadeias produtivas sob dois aspectos: (i) a especificação e armazenamento destas regras e (ii) a análise dos eventos ocorridos na cadeia face a tais restrições. A dissertação parte de um modelo de rastreabilidade para cadeias produtivas agrícolas desenvolvido na UNICAMP. As regras de qualidade gerenciadas definem condições atribuídas a produtos de forma que eles possam ser consumidos. A verificação de regras é baseada na análise de variáveis consideradas críticas para a garantia de qualidade, que são monitoradas por sensores. Portanto, esta pesquisa combina trabalhos em gerenciamento de dados de sensores, bancos de dados ativos e restrições de integridade. As principais contribuições são: um estudo detalhado sobre rastreabilidade associada a regras de qualidade, um modelo para gerenciar a especificação, aplicação e análise dessas regras e um protótipo para validar a arquitetura. O protótipo é baseado em serviços Web e disseminação de eventos. Os estudos de caso são baseados em problemas na área de agricultura.},
author = {Mauricio Augusto Figueiredo},
date = {2009-05-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/Versao-nao-assinada.pdf},
school = {Instituto de Computação - Unicamp},
title = {Managing the Quality of Products in a Supply Chain (Gerenciamento de Regras de Qualidade de Produtos em Cadeias Produtivas)},
year = {2009}
}
|
Bacarin, Evandro;
Madeira, Edmundo R.M.;
Medeiros, Claudia Bauzer
Assembling and Managing Virtual Organizations out of Multi-party Contracts (conference)
11th International Conference on Enterprise Information Systems,
Springer,
2009.
(
Abstract |
BibTeX |
Tags:
Conference
)
@conference{Bacarin2009,
abstract = {Assembling virtual organizations is a complex process, which can be modeled and managed by means of a multi-party contract. Such a contract must encompass seeking consensus among parties in some issues, while simultaneously allowing for competition in others. Present solutions in contract negotiation are not satisfactory because they do not accommodate such a variety of needs and negotiation protocols. This paper shows our solution to this problem, discussing how our SPICA negotiation protocol can be used to build up virtual organizations. It assesses the effectiveness of our approach and discusses the protocol’s implementation.},
author = {Evandro Bacarin and Edmundo R.M. Madeira and Claudia Bauzer Medeiros},
booktitle = {11th International Conference on Enterprise Information Systems},
date = {2009-05-01},
keyword = {Conference},
pages = {758-769},
publisher = {Springer},
title = {Assembling and Managing Virtual Organizations out of Multi-party Contracts},
volume = {24},
year = {2009}
}
|
Santos, Jefersson Alex dos;
Lamparelli, Rubens Augusto;
Torres, Ricardo da Silva
Using Relevance Feedback for Classifying Remote Sensing Images (conference)
Proceedings of Brazilian Remote Sensing Symposium,
2009.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{dosSantos2009b,
abstract = {This paper presents an interactive technique for remote sensing image classification. In our proposal, users are able to interact with the classification system, indicating regions which are of interest. Furthermore, a genetic programming approach is used to learn user preferences and combine image region descriptors that encode spectral and texture properties. Experiments demonstrate that the proposed method is effective and suitable for image classification tasks.},
author = {Jefersson Alex dos Santos and Rubens Augusto Lamparelli and Ricardo da Silva Torres},
booktitle = {Proceedings of Brazilian Remote Sensing Symposium},
date = {2009-04-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/sbsr2009.pdf},
title = {Using Relevance Feedback for Classifying Remote Sensing Images},
year = {2009}
}
|
Malaverri, Joana E. Gonzales
Um Serviço de Gerenciamento de Coletas para Sistemas de Informação de Biodiversidade (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
2009.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{Malaverri2009,
abstract = {Biodiversity research requires correlations of data on living beings and their habitats. Such correlations can be of different types, considering factors such as spatial relationships or environmental descriptions (e.g., description of habitat and ecosystems). Biodiversity information systems are complex pieces of software that allow researchers to perform these kinds of analysis. The complexity of these systems varies with the data used, the target users, and the environment where the systems are executed. One of the problems to be faced, especially on the Web, is the heterogeneity of the data aggravated by the diversity of user vocabularies. This research contributes to solving this problem by presenting a database model that organizes the biodiversity information using consensual data standards. The proposed model combines information collected in the field with that from museum data catalogues. The model was specified with the assistance of biologists and ecologists. The database was encapsulated in a Web service to ensure transparency in using, accessing and recovering the information. The service is invoked by client applications. The database and service were tested and validated using real data, provided by the BioCore project partners. BioCore is a research project that involves computer and biology researchers from UNICAMP and USP.},
author = {Joana E. Gonzales Malaverri},
date = {2009-04-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/teseJoana.pdf},
school = {Instituto de Computação - Unicamp},
title = {Um Serviço de Gerenciamento de Coletas para Sistemas de Informação de Biodiversidade},
year = {2009}
}
|
Penatti, Otávio Augusto Bizetto
Estudo comparativo de descritores para recuperação de imagens por conteúdo na Web (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
2009.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{Penatti2009,
abstract = {A crescente quantidade de imagens geradas e disponibilizadas atualmente tem feito aumentar a necessidade de criação de sistemas de busca para este tipo de informação. Um método promissor para a realização da busca de imagens é a busca por conteúdo. Este tipo de abordagem considera o conteúdo visual das imagens, como cor, textura e forma de objetos, para indexação e recuperação. A busca de imagens por conteúdo tem como componente principal o descritor de imagens. O descritor de imagens é responsável por extrair propriedades visuais das imagens e armazená-las em vetores de características. Dados dois vetores de características, o descritor compara-os e retorna um valor de distância. Este valor quantifica a diferença entre as imagens representadas pelos vetores. Em um sistema de busca de imagens por conteúdo, a distância calculada pelo descritor de imagens é usada para ordenar as imagens da base em relação a uma determinada imagem de consulta. Esta dissertação realiza um estudo comparativo de descritores de imagens considerando a Web como cenário de uso. Este cenário apresenta uma quantidade muito grande de imagens e de conteúdo bastante heterogêneo. O estudo comparativo realizado nesta dissertação é feito em duas abordagens. A primeira delas considera a complexidade assintótica dos algoritmos de extração de vetores de características e das funções de distância dos descritores, os tamanhos dos vetores de características gerados pelos descritores e o ambiente no qual cada descritor foi validado originalmente. A segunda abordagem compara os descritores em experimentos práticos em quatro bases de imagens diferentes. Os descritores são avaliados segundo tempo de extração, tempo para cálculos de distância, requisitos de armazenamento e eficácia. São comparados descritores de cor, textura e forma. 
Os experimentos são realizados com cada tipo de descritor independentemente e, baseado nestes resultados, um conjunto de descritores é avaliado em uma base com mais de 230 mil imagens heterogêneas, que reflete o conteúdo encontrado na Web. A avaliação de eficácia dos descritores na base de imagens heterogêneas é realizada por meio de experimentos com usuários reais. Esta dissertação também apresenta uma ferramenta para a realização automatizada de testes comparativos entre descritores de imagens.},
author = {Otávio Augusto Bizetto Penatti},
date = {2009-03-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/dissertacaoFinal.pdf},
school = {Instituto de Computação - Unicamp},
title = {Estudo comparativo de descritores para recuperação de imagens por conteúdo na Web},
year = {2009}
}
|
Malaverri, Joana G.;
Vilar, Bruno;
Medeiros, Claudia Bauzer
A Tool Based on Web Services to Query Biodiversity Information (conference)
5th International Conference on Web Information Systems and Technologies (Webist 2009),
2009.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Malaverri2009b,
abstract = {Biodiversity Information Systems are complex software systems that present data management solutions to allow researchers to analyze species and their interactions. The complexity of these systems varies with the data handled, users targeted and environment in which they are executed. An open problem to be faced especially in a Web environment is data heterogeneity, and the diversity of user vocabularies and needs. This hampers query processing. This paper presents a tool based on Web services to expand and process biodiversity queries using ontology information. This solution relies on a new database organization, also described here, which combines in a single model data collected in the field with data found in archival sources. This tool is being tested using real case studies, within a large Web-based biodiversity system.},
author = {Joana G. Malaverri and Bruno Vilar and Claudia Bauzer Medeiros},
booktitle = {5th International Conference on Web Information Systems and Technologies (Webist 2009)},
date = {2009-03-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/webist2009.pdf},
title = {A Tool Based on Web Services to Query Biodiversity Information},
year = {2009}
}
|
Macário, Carla Geovana N.;
Medeiros, Claudia Bauzer
The Geospatial Semantic Web: are GIS Catalogs prepared for this? (conference)
5th International Conference on Web Information Systems and Technologies (Webist 2009),
2009.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Macario2009b,
abstract = {Geospatial information catalogs are complex infrastructures that store and publish geographic information. They are an important part of Geographic Information Systems (GIS), systems that manage geospatial data for a wide variety of application domains. To be useful, a catalog must efficiently support discovery and retrieval of geospatial information, working as a key component for planning and decision-making in a variety of domains. Catalogs use standards to support data interoperability. However, the simple adoption of standards and specifications for geospatial data description enables only syntactic interoperability. Semantic heterogeneity still presents challenges for the so-called Geospatial Semantic Web. This work discusses some features that GIS catalogs should have, focusing on semantic issues. We tested some existing and well-known catalogs, comparing them by means of these features. Based on this comparison, we identified some open issues that should be addressed considering advanced Geospatial applications on the Web.},
author = {Carla Geovana N. Macário and Claudia Bauzer Medeiros},
booktitle = {5th International Conference on Web Information Systems and Technologies (Webist 2009)},
date = {2009-03-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/webist09.pdf},
pages = {335-340},
title = {The Geospatial Semantic Web: are GIS Catalogs prepared for this?},
year = {2009}
}
|
Santos, Jefersson Alex dos
Reconhecimento Semi-automático e Vetorização de Regiões em Imagens de Sensoriamento Remoto (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
2009.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{dosSantos2009,
abstract = {O uso de imagens de sensoriamento remoto (ISRs) como fonte de informação em aplicações voltadas para o agronegócio é bastante comum. Nessas aplicações, saber como é a ocupação espacial é fundamental. Entretanto, reconhecer e diferenciar regiões de culturas agrícolas em ISRs ainda não é uma tarefa trivial. Embora existam métodos automáticos propostos para isso, os usuários preferem muitas vezes fazer o reconhecimento manualmente. Isso acontece porque tais métodos normalmente são feitos para resolver problemas específicos, ou quando são de propósito geral, não produzem resultados satisfatórios fazendo com que, invariavelmente, o usuário tenha que revisar os resultados manualmente. A pesquisa realizada objetivou a especificação e implementação parcial de um sistema para o reconhecimento semi-automático e vetorização de regiões em imagens de sensoriamento remoto. Para isso, foi usada uma estratégia interativa, chamada realimentação de relevância, que se baseia no fato de o sistema de classificação poder aprender quais são as regiões de interesse utilizando indicações de relevância feitas pelo usuário do sistema ao longo de iterações. A idéia é utilizar descritores de imagens para codificar informações espectrais e de textura de partições das imagens, e utilizar realimentação de relevância com Programação Genética (PG) para combinar as características dos descritores. PG é uma técnica de aprendizado de máquina baseada na teoria da evolução. As principais contribuições desse trabalho são: estudo comparativo de técnicas de vetorização de imagens; adaptação do modelo de recuperação de imagens por conteúdo proposto recentemente para realização de realimentação de relevância usando regiões de imagem; adaptação do modelo de realimentação de relevância para o reconhecimento de regiões em ISRs; implementação parcial de um sistema de reconhecimento semi-automático e vetorização de regiões em ISRs; proposta de metodologia de validação do sistema desenvolvido.},
author = {Jefersson Alex dos Santos},
date = {2009-02-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2015/09/SantosJeferssonAlexdos.pdf},
school = {Instituto de Computação - Unicamp},
title = {Reconhecimento Semi-automático e Vetorização de Regiões em Imagens de Sensoriamento Remoto},
year = {2009}
}
|
Macário, Carla Geovana N.;
Medeiros, Claudia Bauzer
Specification of a framework for semantic annotation of geospatial data on the web (article)
New York, NY, USA,
SIGSPATIAL Special,
1,
2009.
(
Abstract |
Links |
BibTeX |
Tags:
)
@article{Macario2009b,
abstract = {The Web is a huge repository of geospatial information (GI), distributed all over the world. Efficient retrieval of this information is a key factor in planning and decision-making in a variety of domains. However, the proposed standards and specifications for data annotation and exchange enable only syntactic interoperability. Semantic heterogeneity still presents challenges for GI retrieval. One possible approach to tackle these problems is to elicit knowledge by means of semantic annotations, based on multiple ontologies. This work describes a framework to support management of semantic annotations for digital content on the Web, for agricultural planning and monitoring. This will help end-users (agronomists, farmers, Earth scientists) to work cooperatively in developing integrated practices for land management. Content to be annotated in this context includes, for instance, satellite images, sensor data temporal series (e.g., from ground sensors or weather stations), and all kinds of textual data files.},
address = {New York, NY, USA},
author = {Carla Geovana N. Macário and Claudia Bauzer Medeiros},
date = {2009-01-01},
doi = {http://doi.acm.org/10.1145/1517463.1517466},
issn = {1946-7729},
journal = {SIGSPATIAL Special},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/p27-macario.pdf},
number = {1},
pages = {27-32},
title = {Specification of a framework for semantic annotation of geospatial data on the web},
volume = {1},
year = {2009}
}
|
Pastorello Jr., Gilberto Zonta;
Senra, Rodrigo Dias Arruda;
Medeiros, Claudia Bauzer
A standards-based framework to foster geospatial data and process interoperability (article)
Journal of the Brazilian Computer Society,
1,
2009.
(
Abstract |
Links |
BibTeX |
Tags:
Article
)
@article{Jr2009,
abstract = {The quest for interoperability is one of the main driving forces behind international organizations such as OGC and W3C. In parallel, a trend in systems design and development is to break down GIS functionalities into modules that can be composed in an ad hoc manner. This component-driven approach increases flexibility and extensibility. For scientists whose research involves geospatial analysis, however, such initiatives mean more than interoperability and flexibility. These efforts are progressively shielding these users from having to deal with problems such as data representation formats, communication protocols or pre-processing algorithms. Once scientists are allowed to abstract from lower level concerns, they can shift their focus to the design and implementation of the computational models they are interested in. This paper analyzes how interoperability and componentization efforts have this underestimated impact on the design and development perspective. This discussion is illustrated by the description of the design and implementation of WebMAPS, a geospatial information system to support agricultural planning and monitoring. By taking advantage of new results in the above areas, the experience with WebMAPS presents a road map to leverage system design and development by the seamless composition of distributed data sources and processing solutions.},
author = {Gilberto Zonta {Pastorello Jr} and Rodrigo Dias Arruda Senra and Claudia Bauzer Medeiros},
date = {2009-01-01},
journal = {Journal of the Brazilian Computer Society},
keyword = {Article},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/pastorellojr-senra-medeiros_JBCS.pdf},
number = {1},
pages = {13-26},
title = {A standards-based framework to foster geospatial data and process interoperability},
volume = {15},
year = {2009}
}
The quest for interoperability is one of the main driving forces behind international organizations such as OGC and W3C. In parallel, a trend in systems design and development is to break down GIS functionalities into modules that can be composed in an ad hoc manner. This component-driven approach increases flexibility and extensibility. For scientists whose research involves geospatial analysis, however, such initiatives mean more than interoperability and flexibility. These efforts are progressively shielding these users from having to deal with problems such as data representation formats, communication protocols or pre-processing algorithms. Once scientists are allowed to abstract from lower level concerns, they can shift their focus to the design and implementation of the computational models they are interested in. This paper analyzes how interoperability and componentization efforts have this underestimated impact on the design and development perspective. This discussion is illustrated by the description of the design and implementation of WebMAPS, a geospatial information system to support agricultural planning and monitoring. By taking advantage of new results in the above areas, the experience with WebMAPS presents a road map to leverage system design and development by the seamless composition of distributed data sources and processing solutions.
|
2008 |
Pastorello Jr, Gilberto Zonta;
Daltio, Jaudete;
Medeiros, Claudia Bauzer
Multimedia Semantic Annotation Propagation (conference)
Proceedings of the 1st IEEE International Workshop on Data Semantics for Multimedia Systems and Applications (DSMSA) -- 10th IEEE International Symposium on Multimedia (ISM),
2008.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Jr2008b,
abstract = {Scientific research is producing and consuming large volumes of multimedia data at an ever growing rate. Annotations to the data help associate context and enhance content management, making it easier to interpret and share data. However, raw data often needs to go through complex processing steps before it can be consumed. During these transformation processes, original annotations from the production phase are often discarded or ignored, since their usefulness is usually limited to the first transformation step. New annotations must be associated with the final product, a time-consuming task often carried out manually. Systematically associating new annotations to the result of each data transformation step is known as {\em annotation propagation}. This paper introduces techniques for structuring annotations by applying references to ontologies and automatically transforming these annotations along with data transformation processes. This helps the construction of new annotated multimedia data sets, preserving contextual information. The solution is based on: (i) the notion of semantic annotations; and (ii) a set of transformation rules, based on ontological relations.},
author = {Gilberto Zonta {Pastorello Jr} and Jaudete Daltio and Claudia Bauzer Medeiros},
booktitle = {Proceedings of the 1st IEEE International Workshop on Data Semantics for Multimedia Systems and Applications (DSMSA) -- 10th IEEE International Symposium on Multimedia (ISM)},
date = {2008-12-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/PastorelloJretal-AnnotationPropagation.pdf},
title = {Multimedia Semantic Annotation Propagation},
year = {2008}
}
Scientific research is producing and consuming large volumes of multimedia data at an ever growing rate. Annotations to the data help associate context and enhance content management, making it easier to interpret and share data. However, raw data often needs to go through complex processing steps before it can be consumed. During these transformation processes, original annotations from the production phase are often discarded or ignored, since their usefulness is usually limited to the first transformation step. New annotations must be associated with the final product, a time-consuming task often carried out manually. Systematically associating new annotations to the result of each data transformation step is known as annotation propagation. This paper introduces techniques for structuring annotations by applying references to ontologies and automatically transforming these annotations along with data transformation processes. This helps the construction of new annotated multimedia data sets, preserving contextual information. The solution is based on: (i) the notion of semantic annotations; and (ii) a set of transformation rules, based on ontological relations.
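To make the propagation idea concrete, here is a minimal hypothetical sketch in Python (the ontology, the rules, and all term names are invented for illustration, not the authors' implementation): annotations reference ontology terms, and each transformation step applies rules keyed on ontological relations to derive annotations for its output.

```python
# Hypothetical sketch of annotation propagation. The toy ontology, the
# rules, and the term names below are invented for illustration.

# Toy ontology: term -> broader (parent) term.
ONTOLOGY = {
    "NDVI-map": "vegetation-index",
    "vegetation-index": "satellite-product",
}

def broader(term):
    """Follow the 'broader' ontological relation one step, if any."""
    return ONTOLOGY.get(term)

def propagate(annotations, transformation):
    """Derive output annotations from input ones for one transformation
    step, using rules keyed on annotation kind and ontological relations."""
    out = []
    for key, term in annotations:
        if transformation == "crop" and key == "extent":
            # Rule: cropping invalidates the original spatial extent.
            out.append(("extent", "user-defined-region"))
        elif key == "theme" and broader(term):
            # Rule: thematic terms are generalized via the ontology.
            out.append(("theme", broader(term)))
        else:
            # Default: carry the annotation over unchanged.
            out.append((key, term))
    return out

result = propagate([("theme", "NDVI-map"), ("extent", "full-scene")], "crop")
```

The point of the sketch is that new, valid annotations for the transformed product are produced automatically instead of being re-entered by hand.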
|
Pastorello Jr, Gilberto Zonta
Managing the lifecycle of sensor data: from production to consumption (phdthesis)
Instituto de Computação - Unicamp,
phdthesis,
2008.
(
Abstract |
Links |
BibTeX |
Tags:
PhDThesis
)
@phdthesis{Jr2008,
abstract = {Sensing devices are becoming widely disseminated, being applied in several domains, notably in scientific research. However, the increase in their number and variety introduces problems in managing the produced data, such as how to provide sensor data at distinct rates or temporal resolutions for different applications, or how to pre-process or format the data differently for each request. This work is concerned with tackling four issues that arise in the management of sensor data for scientific applications: (i) providing homogeneous access to heterogeneous sensing devices and their data; (ii) managing the composition of operations applied to sensor data; (iii) offering flexible data pre-processing facilities prior to sensor data publication; and, (iv) propagating and creating valid data annotations (metadata) throughout the data life cycle. The proposed solution to issue (i) is to uniformly encapsulate both software and data by extending a component technology called Digital Content Components (DCCs), also allowing associated annotations. Using these components as a basis, the proposed solution to (ii) is to apply scientific workflows to coordinate the combination of data and software DCCs. The solution proposed to (iii) involves invoking and posting workflow specifications from the data provider as well as using the annotations on DCCs to enrich the queries and answers. Finally, an annotation propagation mechanism is proposed as a solution to (iv). Our contributions are presented within a framework for sensor data management, which unifies aspects of data access, pre-processing, publication and annotation.},
author = {Gilberto Zonta {Pastorello Jr}},
date = {2008-12-01},
keyword = {PhDThesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/tese_GZPastorelloJr.pdf},
school = {Instituto de Computação - Unicamp},
title = {Managing the lifecycle of sensor data: from production to consumption},
year = {2008}
}
Sensing devices are becoming widely disseminated, being applied in several domains, notably in scientific research. However, the increase in their number and variety introduces problems in managing the produced data, such as how to provide sensor data at distinct rates or temporal resolutions for different applications, or how to pre-process or format the data differently for each request. This work is concerned with tackling four issues that arise in the management of sensor data for scientific applications: (i) providing homogeneous access to heterogeneous sensing devices and their data; (ii) managing the composition of operations applied to sensor data; (iii) offering flexible data pre-processing facilities prior to sensor data publication; and, (iv) propagating and creating valid data annotations (metadata) throughout the data life cycle. The proposed solution to issue (i) is to uniformly encapsulate both software and data by extending a component technology called Digital Content Components (DCCs), also allowing associated annotations. Using these components as a basis, the proposed solution to (ii) is to apply scientific workflows to coordinate the combination of data and software DCCs. The solution proposed to (iii) involves invoking and posting workflow specifications from the data provider as well as using the annotations on DCCs to enrich the queries and answers. Finally, an annotation propagation mechanism is proposed as a solution to (iv). Our contributions are presented within a framework for sensor data management, which unifies aspects of data access, pre-processing, publication and annotation.
|
Macario, C. G. N.;
Medeiros, C. B.
Specification of a Framework for Semantic Annotation of Geospatial Data on the Web (conference)
Digital Proceedings of 16th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems - Ph.D. Dissertation Showcase,
ACM Press,
New York,
2008.
(
Links |
BibTeX |
Tags:
Conference
)
@conference{Macario2008b,
address = {New York},
author = {C. G. N. Macario and C. B. Medeiros},
booktitle = {Digital Proceedings of 16th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems - Ph.D. Dissertation Showcase},
date = {2008-11-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/acmgis.pdf},
publisher = {ACM Press},
title = {Specification of a Framework for Semantic Annotation of Geospatial Data on the Web},
year = {2008}
}
|
Pastorello Jr, Gilberto Zonta;
Senra, Rodrigo Dias Arruda;
Medeiros, Claudia Bauzer
Bridging the gap between geospatial resource providers and model developers (conference)
Proceedings of the 16th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (ACM-GIS),
2008.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Jr2008c,
abstract = {This paper analyzes how interoperability and componentization efforts in the geospatial domain have an underestimated impact on the user perspective, directly affecting model development. This discussion is illustrated by the description of the design and implementation of WebMAPS, a geospatial information system to support agricultural planning and monitoring.},
author = {Gilberto Zonta {Pastorello Jr} and Rodrigo Dias Arruda Senra and Claudia Bauzer Medeiros},
booktitle = {Proceedings of the 16th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (ACM-GIS)},
date = {2008-11-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/pastorellojr-senra-medeiros_acmgis2008.pdf},
title = {Bridging the gap between geospatial resource providers and model developers},
year = {2008}
}
This paper analyzes how interoperability and componentization efforts in the geospatial domain have an underestimated impact on the user perspective, directly affecting model development. This discussion is illustrated by the description of the design and implementation of WebMAPS, a geospatial information system to support agricultural planning and monitoring.
|
Figueiredo, Maurício Augusto;
Medeiros, Claudia Bauzer
Gerenciamento de Regras de Qualidade em Cadeias Produtivas Agrícolas (conference)
VII WTDBD - Workshop de Teses e Dissertações em Bancos de Dados,
SBBD 2008 – XXIII Simpósio Brasileiro de Bancos de Dados,
Campinas, SP - Brasil,
2008.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{eMedeiros2008b,
abstract = {The use of computational systems to acquire, store, process, and analyze information flowing through supply chains has become an important research topic. Besides scientific challenges of a multidisciplinary nature, it has several economic consequences. The goal of this dissertation is to address mechanisms for managing quality rules applied to an agricultural supply chain, under two aspects: (i) the specification and storage of these rules and (ii) the analysis of the events occurring in the chain against such constraints. The starting point of this dissertation is a traceability model for agricultural and livestock supply chains developed at UNICAMP. The research combines work on active databases, Web services, and event dissemination, and uses data from sensor networks. Expected contributions include a detailed study of traceability associated with quality rules, a model capable of managing the specification, application, and analysis of these rules, and a prototype to validate the work. The case studies are based on problems in agriculture, in view of the projects currently under way at IC - UNICAMP.},
address = {Campinas, SP - Brasil},
author = {Maurício Augusto Figueiredo and Claudia Bauzer Medeiros},
booktitle = {VII WTDBD - Workshop de Teses e Dissertações em Bancos de Dados},
date = {2008-10-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/wtdbd-Mauricio.pdf},
pages = {1-6},
publisher = {SBBD 2008 – XXIII Simpósio Brasileiro de Bancos de Dados},
title = {Gerenciamento de Regras de Qualidade em Cadeias Produtivas Agrícolas},
year = {2008}
}
The use of computational systems to acquire, store, process, and analyze information flowing through supply chains has become an important research topic. Besides scientific challenges of a multidisciplinary nature, it has several economic consequences. The goal of this dissertation is to address mechanisms for managing quality rules applied to an agricultural supply chain, under two aspects: (i) the specification and storage of these rules and (ii) the analysis of the events occurring in the chain against such constraints. The starting point of this dissertation is a traceability model for agricultural and livestock supply chains developed at UNICAMP. The research combines work on active databases, Web services, and event dissemination, and uses data from sensor networks. Expected contributions include a detailed study of traceability associated with quality rules, a model capable of managing the specification, application, and analysis of these rules, and a prototype to validate the work. The case studies are based on problems in agriculture, in view of the projects currently under way at IC - UNICAMP.
|
Vilar, Bruno S. C. M.;
Medeiros, Claudia M. Bauzer
Processamento Semântico de Consultas para Sistemas de Biodiversidade (conference)
VII WTDBD - Workshop de Teses e Dissertações em Bancos de Dados,
Campinas, SP,
2008.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{eMedeiros2008,
abstract = {Biodiversity information systems deal with a heterogeneous set of information provided by research groups, such as the species studied, the structuring of the information, and the study sites. This heterogeneity of data, users, and procedures hampers the reuse and sharing of information. The goal of this work is to improve the process of querying information in biodiversity systems. To this end, we propose a module that pre-processes a user (scientist) query, aggregating information from ontologies to disambiguate the query. The work assumes that the data to be queried are distributed across repositories on the Web, which are maintained by groups of scientists and have their contents accessible through Web services.},
address = {Campinas, SP},
author = {Bruno S. C. M. Vilar and Claudia M. Bauzer Medeiros},
booktitle = {VII WTDBD - Workshop de Teses e Dissertações em Bancos de Dados},
date = {2008-10-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/WTDBD-Bruno-2008.pdf},
pages = {37-42},
title = {Processamento Semântico de Consultas para Sistemas de Biodiversidade},
year = {2008}
}
Biodiversity information systems deal with a heterogeneous set of information provided by research groups, such as the species studied, the structuring of the information, and the study sites. This heterogeneity of data, users, and procedures hampers the reuse and sharing of information. The goal of this work is to improve the process of querying information in biodiversity systems. To this end, we propose a module that pre-processes a user (scientist) query, aggregating information from ontologies to disambiguate the query. The work assumes that the data to be queried are distributed across repositories on the Web, which are maintained by groups of scientists and have their contents accessible through Web services.
|
Santos, Jefersson Alex dos;
Ferreira, Cristiano Dalmaschio;
Torres, Ricardo da Silva
A genetic programming approach for relevance feedback in region-based image retrieval systems. (conference)
Brazilian Symposium on Computer Graphics and Image Processing (SIBGRAPI),
IEEE Computer Society,
Los Alamitos,
2008.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{dosSantos2008,
abstract = {This paper presents a new relevance feedback method for content-based image retrieval using local image features. This method adopts a genetic programming approach to learn user preferences and combine the region similarity values in a query session. Experiments demonstrate that the proposed method yields more effective results than the Local Aggregation Pattern (LAP)-based relevance feedback technique.},
address = {Los Alamitos},
author = {Jefersson Alex dos Santos and Cristiano Dalmaschio Ferreira and Ricardo da Silva Torres},
booktitle = {Brazilian Symposium on Computer Graphics and Image Processing (SIBGRAPI)},
date = {2008-10-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/santos-GPImageRetrieval.pdf},
publisher = {IEEE Computer Society},
title = {A genetic programming approach for relevance feedback in region-based image retrieval systems.},
year = {2008}
}
This paper presents a new relevance feedback method for content-based image retrieval using local image features. This method adopts a genetic programming approach to learn user preferences and combine the region similarity values in a query session. Experiments demonstrate that the proposed method yields more effective results than the Local Aggregation Pattern (LAP)-based relevance feedback technique.
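The feedback loop the abstract describes can be sketched in heavily simplified form (the feedback data and the candidate combiners are invented for illustration; a real genetic programming run would evolve expression trees over operators such as +, *, and max rather than pick from a fixed set): user judgments define a fitness function, and candidate functions that combine per-region similarity values are ranked by it.

```python
# Invented toy example of relevance feedback as a search over
# similarity-combining functions.

def fitness(combiner, feedback):
    """Fraction of (relevant, irrelevant) image pairs that the combiner
    ranks correctly, i.e. the relevant image scores strictly higher."""
    relevant = [combiner(sims) for sims, rel in feedback if rel]
    irrelevant = [combiner(sims) for sims, rel in feedback if not rel]
    pairs = [(r, i) for r in relevant for i in irrelevant]
    return sum(r > i for r, i in pairs) / len(pairs)

# Each image is a vector of per-region similarity values to the query,
# plus the user's relevant/irrelevant judgment (all values invented).
feedback = [
    ([0.9, 0.1, 0.8], True),
    ([0.8, 0.2, 0.9], True),
    ([0.5, 0.5, 0.5], False),
    ([0.4, 0.9, 0.2], False),
]

# Candidate combining functions standing in for evolved GP individuals.
candidates = {
    "mean": lambda sims: sum(sims) / len(sims),
    "max": max,
    "min": min,
}
best = max(candidates, key=lambda name: fitness(candidates[name], feedback))
```

On this toy feedback the averaged region similarity separates relevant from irrelevant images perfectly, which is exactly the property such a fitness function rewards when searching for a good combiner.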
|
Penatti, Otávio Augusto Bizetto;
Torres, Ricardo da Silva
Color descriptors for Web image retrieval: a comparative study (conference)
Brazilian Symposium on Computer Graphics and Image Processing (SIBGRAPI),
IEEE Computer Society,
Los Alamitos,
2008.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Penatti2008,
abstract = {This paper presents a comparative study of color descriptors for content-based image retrieval on the Web. Several image descriptors were compared theoretically, and the most relevant ones were implemented and tested on two different databases. The main goal was to find the best descriptors for Web image retrieval. Descriptors are compared according to the complexity of their extraction and distance functions, the compactness of their feature vectors, and their ability to retrieve relevant images.},
address = {Los Alamitos},
author = {Otávio Augusto Bizetto Penatti and Ricardo da Silva Torres},
booktitle = {Brazilian Symposium on Computer Graphics and Image Processing (SIBGRAPI)},
date = {2008-10-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/penatti-ColorDescriptorsWeb.pdf},
publisher = {IEEE Computer Society},
title = {Color descriptors for Web image retrieval: a comparative study},
year = {2008}
}
This paper presents a comparative study of color descriptors for content-based image retrieval on the Web. Several image descriptors were compared theoretically, and the most relevant ones were implemented and tested on two different databases. The main goal was to find the best descriptors for Web image retrieval. Descriptors are compared according to the complexity of their extraction and distance functions, the compactness of their feature vectors, and their ability to retrieve relevant images.
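As an illustration of the kind of comparison performed (toy single-channel data; the study's actual descriptors, distance functions, and databases differ), the following sketch builds a color-histogram descriptor, treats vector length as the compactness measure, and compares images with an L1 distance.

```python
# Toy, single-channel illustration of a color descriptor comparison.

def histogram(pixels, bins):
    """Quantize 0-255 channel values into `bins` buckets, normalized so
    the histogram sums to 1."""
    h = [0] * bins
    for v in pixels:
        h[min(v * bins // 256, bins - 1)] += 1
    return [count / len(pixels) for count in h]

def l1(a, b):
    """L1 (city-block) distance between two feature vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))

# Tiny "images" as flat lists of channel values (invented data).
img_query = [10, 12, 200, 210, 220]
img_similar = [8, 15, 198, 205, 230]
img_different = [100, 120, 130, 140, 128]

bins = 4  # coarser quantization gives a more compact feature vector
dq, ds, dd = (histogram(i, bins) for i in (img_query, img_similar, img_different))
```

With these values the perceptually similar image lands at a smaller L1 distance from the query than the dissimilar one, which is the retrieval behavior the compared descriptors are judged on.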
|
Macário, Carla G. N.;
Medeiros, Claudia B.
Specification of a Framework for Semantic Annotation of Geospatial Data on the Web (conference)
VII Workshop de Teses e Dissertações em Bancos de Dados (SBBD 2008),
Campinas, SP,
2008.
(
Links |
BibTeX |
Tags:
Conference
)
@conference{Macario2008,
address = {Campinas, SP},
author = {Carla G. N. Macário and Claudia B. Medeiros},
booktitle = {VII Workshop de Teses e Dissertações em Bancos de Dados (SBBD 2008)},
date = {2008-10-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/A01-45190.pdf},
pages = {1-8},
title = {Specification of a Framework for Semantic Annotation of Geospatial Data on the Web},
year = {2008}
}
|
Ferreira, Cristiano D.;
Torres, Ricardo da S.;
Gonçalves, Marcos A.;
Fan, Weiguo
Image Retrieval with Relevance Feedback based on Genetic Programming (conference)
XXIII Simpósio Brasileiro de Banco de Dados,
2008.
(
BibTeX |
Tags:
Conference
)
@conference{Ferreira2008,
author = {Cristiano D. Ferreira and Ricardo da S. Torres and Marcos A. Gonçalves and Weiguo Fan},
booktitle = {XXIII Simpósio Brasileiro de Banco de Dados},
date = {2008-10-01},
keyword = {Conference},
title = {Image Retrieval with Relevance Feedback based on Genetic Programming},
year = {2008}
}
|
Santos, Jefersson Alex dos;
Ferreira, Cristiano Dalmaschio;
Torres, Ricardo da Silva
A genetic programming approach for relevance feedback in region-based image retrieval systems. (Technical Report)
Institute of Computing, State University of Campinas,
Technical Report,
08-19,
2008.
(
Abstract |
Links |
BibTeX |
Tags:
Techreport
)
@techreport{dosSantos2008b,
abstract = {This paper presents a new relevance feedback method for content-based image retrieval using local image features. This method adopts a genetic programming approach to learn user preferences and combine the region similarity values in a query session. Experiments demonstrate that the proposed method yields more effective results than the Local Aggregation Pattern (LAP)-based relevance feedback technique.},
author = {Jefersson Alex dos Santos and Cristiano Dalmaschio Ferreira and Ricardo da Silva Torres},
date = {2008-08-01},
institution = {Institute of Computing, State University of Campinas},
keyword = {Techreport},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/08-19.ps.gz},
number = {08-19},
title = {A genetic programming approach for relevance feedback in region-based image retrieval systems.},
type = {Technical Report},
year = {2008}
}
This paper presents a new relevance feedback method for content-based image retrieval using local image features. This method adopts a genetic programming approach to learn user preferences and combine the region similarity values in a query session. Experiments demonstrate that the proposed method yields more effective results than the Local Aggregation Pattern (LAP)-based relevance feedback technique.
|
Pastorello Jr, Gilberto Zonta;
Daltio, Jaudete;
Medeiros, Claudia Bauzer
An Annotation Propagation Mechanism for Multimedia Content (Technical Report)
Institute of Computing, State University of Campinas,
Technical Report,
IC-08-17,
2008.
(
Abstract |
Links |
BibTeX |
Tags:
Techreport
)
@techreport{Jr2008d,
abstract = {Scientific research is producing and consuming large volumes of multimedia data at an ever growing rate. Metadata -- data about data -- is the primary mechanism through which context is associated with content to enhance content management. It also makes it easier to interpret and share data and helps digital curation. However, raw data often needs to go through complex processing steps before it can be consumed. During these transformation processes, original metadata from the production phase is often discarded or ignored, since its usefulness is usually limited to the first transformation step. New metadata must be associated with the final product, a time-consuming task often carried out manually. Systematically associating new metadata to the result of each data transformation step is known as metadata evolution or annotation propagation. This paper introduces techniques for semantically enhancing metadata and automatically transforming them along with the data transformation processes. This helps the construction of new annotated multimedia data sets, preserving contextual information. The solution is based on: (i) the notion of semantic annotations, which are metadata structures enriched with domain ontologies; (ii) a set of transformation rules, based on ontological relations; and, (iii) workflows, which steer the sequence of transformations.},
author = {Gilberto Zonta {Pastorello Jr} and Jaudete Daltio and Claudia Bauzer Medeiros},
date = {2008-08-01},
institution = {Institute of Computing, State University of Campinas},
keyword = {Techreport},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/08-17.ps.gz},
number = {IC-08-17},
title = {An Annotation Propagation Mechanism for Multimedia Content},
type = {Technical Report},
year = {2008}
}
Scientific research is producing and consuming large volumes of multimedia data at an ever growing rate. Metadata -- data about data -- is the primary mechanism through which context is associated with content to enhance content management. It also makes it easier to interpret and share data and helps digital curation. However, raw data often needs to go through complex processing steps before it can be consumed. During these transformation processes, original metadata from the production phase is often discarded or ignored, since its usefulness is usually limited to the first transformation step. New metadata must be associated with the final product, a time-consuming task often carried out manually. Systematically associating new metadata to the result of each data transformation step is known as metadata evolution or annotation propagation. This paper introduces techniques for semantically enhancing metadata and automatically transforming them along with the data transformation processes. This helps the construction of new annotated multimedia data sets, preserving contextual information. The solution is based on: (i) the notion of semantic annotations, which are metadata structures enriched with domain ontologies; (ii) a set of transformation rules, based on ontological relations; and, (iii) workflows, which steer the sequence of transformations.
|
Torres, Ricardo da S.;
Zegarra, Javier A. M.;
Santos, Jefersson A. dos;
Ferreira, Cristiano D.;
Penatti, Otávio A. B.;
Andaló, Fernanda;
Almeida, Jurandy
Recuperação de Imagens: Desafios e Novos Rumos (conference)
Seminário Integrado de Software e Hardware (SEMISH),
2008.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{daTorres2008,
abstract = {Huge image collections have been created, managed and stored in image databases. Given the large size of these collections, it is essential to provide efficient and effective mechanisms to retrieve images. This is the objective of the so-called content-based image retrieval – CBIR – systems. Traditionally, these systems are based on objective criteria to represent and compare images. However, users of CBIR systems tend to use subjective elements to compare images. The use of these elements has improved the effectiveness of content-based image retrieval systems. This paper discusses approaches that incorporate semantic information into the content-based image retrieval process, highlighting some new challenges in this area.},
author = {Ricardo da S. Torres and Javier A. M. Zegarra and Jefersson A. dos Santos and Cristiano D. Ferreira and Otávio A. B. Penatti and Fernanda Andaló and Jurandy Almeida},
booktitle = {Seminário Integrado de Software e Hardware (SEMISH)},
date = {2008-07-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/artigo_semish.pdf},
title = {Recuperação de Imagens: Desafios e Novos Rumos},
year = {2008}
}
Huge image collections have been created, managed and stored in image databases. Given the large size of these collections, it is essential to provide efficient and effective mechanisms to retrieve images. This is the objective of the so-called content-based image retrieval – CBIR – systems. Traditionally, these systems are based on objective criteria to represent and compare images. However, users of CBIR systems tend to use subjective elements to compare images. The use of these elements has improved the effectiveness of content-based image retrieval systems. This paper discusses approaches that incorporate semantic information into the content-based image retrieval process, highlighting some new challenges in this area.
|
Pastorello Jr, Gilberto Zonta;
Medeiros, Claudia Bauzer;
Santanchè, André
Accessing and Processing Sensing Data (conference)
Proceedings of the IEEE 11th International Conference on Computational Science and Engineering (CSE),
IEEE Computer Society,
2008.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Jr2008e,
abstract = {Scientific models are increasingly dependent on processing large volumes of streamed sensing data from a wide range of sensors, from ground-based devices to satellite-borne infrared instruments. The proliferation, variety and ubiquity of those devices have added new dimensions to the problem of data handling in computational models. This raises several issues, one of which -- providing means to access and process these data -- is tackled by this paper. Our solution involves the design and implementation of a framework for sensor data management, which relies on a specific component technology -- DCC. DCCs homogeneously encapsulate individual sensors, sensor networks and sensor data archival files. They also implement facilities for controlling data production, integration and publication. As a result, developers need not concern themselves with sensor particularities, dealing instead with uniform interfaces to access data, regardless of the nature of the data providers.},
author = {Gilberto Zonta {Pastorello Jr} and Claudia Bauzer Medeiros and André Santanchè},
booktitle = {Proceedings of the IEEE 11th International Conference on Computational Science and Engineering (CSE)},
date = {2008-07-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/pastorellojr_medeiros_santanche_cse2008.pdf},
publisher = {IEEE Computer Society},
title = {Accessing and Processing Sensing Data},
year = {2008}
}
Scientific models are increasingly dependent on processing large volumes of streamed sensing data from a wide range of sensors, from ground-based devices to satellite-borne infrared instruments. The proliferation, variety and ubiquity of those devices have added new dimensions to the problem of data handling in computational models. This raises several issues, one of which -- providing means to access and process these data -- is tackled by this paper. Our solution involves the design and implementation of a framework for sensor data management, which relies on a specific component technology -- DCC. DCCs homogeneously encapsulate individual sensors, sensor networks and sensor data archival files. They also implement facilities for controlling data production, integration and publication. As a result, developers need not concern themselves with sensor particularities, dealing instead with uniform interfaces to access data, regardless of the nature of the data providers.
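A minimal sketch of the uniform-access idea, under a deliberately simplified interface (the class and method names are hypothetical, not the actual DCC API): heterogeneous sources, here an archival file and a live stream, are hidden behind a single read() method so client code never deals with sensor particularities.

```python
# Hypothetical sketch; class and method names are not the actual DCC API.

class SensorComponent:
    """Uniform interface: every encapsulated source exposes read()."""
    def read(self):
        raise NotImplementedError

class ArchiveFileSensor(SensorComponent):
    """Wraps rows already parsed from a sensor data archival file."""
    def __init__(self, rows):
        self._rows = rows
    def read(self):
        return list(self._rows)

class StreamingSensor(SensorComponent):
    """Wraps a live stream, returning a fixed-size window per read()."""
    def __init__(self, stream, window=3):
        self._stream = stream
        self._window = window
    def read(self):
        return [next(self._stream) for _ in range(self._window)]

def collect(sources):
    """Client code sees only SensorComponent.read(), whatever the origin."""
    return [value for source in sources for value in source.read()]

data = collect([ArchiveFileSensor([21.5, 21.7]), StreamingSensor(iter(range(100)))])
```

The collect() function is the payoff: it composes archival and streaming data without branching on the provider type, which is the homogeneity the abstract claims for the framework.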
|
Daltio, Jaudete;
Medeiros, Claudia Bauzer
Aondê: Um Serviço Web de Ontologias para Interoperabilidade em Sistemas de Biodiversidade (Aondê: An Ontology Web Service for Interoperability across Biodiversity Information Systems) (conference)
First place - Dissertation Competition - XXVIII Conference of the Brazilian Computer Society,
Belém, Brazil,
2008.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Daltio2008b,
abstract = {Research in biodiversity associates data on living beings and their habitats, constructing sophisticated models and correlating several kinds of heterogeneous data. Such data are provided by research groups with different vocabularies, methodologies and goals, which hampers their cooperation. Ontologies are being proposed as one of the means to solve heterogeneity problems. However, this raises new challenges in managing and sharing ontologies. This dissertation specified and developed a new kind of Web Service, whose goal is to contribute to solving such problems. The service supports a wide range of operations on ontologies, and was implemented and validated with real case studies in biodiversity, for large ontologies. The dissertation is available in the UNICAMP digital library.},
address = {Belém, Brazil},
author = {Jaudete Daltio and Claudia Bauzer Medeiros},
booktitle = {First place - Dissertation Competition - XXVIII Conference of the Brazilian Computer Society},
date = {2008-07-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/CTD-publicado.pdf},
pages = {49-56},
title = {Aondê: Um Serviço Web de Ontologias para Interoperabilidade em Sistemas de Biodiversidade (Aondê: An Ontology Web Service for Interoperability across Biodiversity Information Systems)},
year = {2008}
}
|
Medeiros, Claudia Bauzer
Grand Research Challenges in Computer Science in Brazil (article)
IEEE Computer,
6,
2008.
(
Abstract |
BibTeX |
Tags:
Article
)
@article{Medeiros2008,
abstract = {In May 2006, the Brazilian Computer Society proposed five Grand Research Challenges in Computer Science in Brazil. The society's goal was to foster long-term planning and research in computer science, enhance cooperation with other scientific domains, and provide input to public R&D policymakers in Brazil. This paper presents the five challenges under a global perspective, showing how they can benefit from cooperation with other research fields, and discussing CS research trends in Brazil. The paper also discusses how the challenges were elicited, and future directions.},
author = {Claudia Bauzer Medeiros},
date = {2008-06-01},
journal = {IEEE Computer},
keyword = {Article},
note = {http://doi.ieeecomputersociety.org/10.1109/MC.2008.188},
number = {6},
pages = {59-66},
title = {Grand Research Challenges in Computer Science in Brazil},
volume = {41},
year = {2008}
}
|
Rothenberg, Christian Esteve;
Figueiredo, Maurício Augusto
Um controlador de recursos para redes de próxima geração (conference)
I Congresso Tecnológico InfoBrasil,
Fortaleza, CE - Brasil,
2008.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{eFigueiredo2008,
abstract = {With the evolution of networks towards the Next Generation Network (NGN) model, in which the transport, control and application layers are separated, a key element is a subsystem that enables and arbitrates the requests of applications and session controllers according to the Quality of Service resources (bandwidth, priority, etc.) available in the different segments of the transport network. This work introduces the functionalities defined by the international NGN standards, proposing an implementation architecture that is the object of applied research to explore technologies and conceive studies for the development of solutions that provide integrated management of heterogeneous network resources and meet the particular needs of the Brazilian telecommunications market.},
address = {Fortaleza, CE - Brasil},
author = {Christian Esteve Rothenberg and Maurício Augusto Figueiredo},
booktitle = {I Congresso Tecnológico InfoBrasil},
date = {2008-05-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/artigo-ngnrc.pdf},
pages = {1-6},
title = {Um controlador de recursos para redes de próxima geração},
year = {2008}
}
|
Nakai, Alan Massaru;
Macário, Carla Geovana;
Madeira, Edmundo;
Medeiros, Claudia Bauzer
An Infrastructure for Sharing and Executing Choreographies (conference)
Proceedings of the 4th International Conference on Web Information Systems and Technologies (WEBIST),
INSTICC,
2008.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Nakai2008,
abstract = {The main attractiveness of Web services is their capacity to provide interoperability among heterogeneous distributed systems. Increasingly, companies and organizations have adopted Web services as a way to interoperate with their business partners. In such a scenario, Web services choreography can be applied in the specification of interorganizational business processes. However, the dynamic nature of business partnerships requires mechanisms for the agile design and deployment of choreographies. In this paper, we present an infrastructure that addresses this concern. Our approach achieves flexibility by providing mechanisms for sharing, finding and executing choreographies in a user-friendly manner. We also present a prototype implementation.},
author = {Alan Massaru Nakai and Carla Geovana Macário and Edmundo Madeira and Claudia Bauzer Medeiros},
booktitle = {Proceedings of the 4th International Conference on Web Information Systems and Technologies (WEBIST)},
date = {2008-05-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/nakai-webist2008-cr.pdf},
pages = {455-460},
publisher = {INSTICC},
title = {An Infrastructure for Sharing and Executing Choreographies},
year = {2008}
}
|
Pastorello Jr, Gilberto Zonta;
Gomes Jr, Luiz Celso;
Medeiros, Claudia Bauzer;
Santanchè, André
Sensor Data Publication on the Web for Scientific Applications (conference)
Proceedings of the 4th International Conference on Web Information Systems and Technologies (WEBIST),
INSTICC,
2008.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Jr2008b,
abstract = {This paper considers the problems of sensor data publication, taking advantage of research on components and Web service standards. Sensor data is widely used in scientific experiments -- e.g., for model validation, environment monitoring, and calibrating running applications. Heterogeneity in sensing devices hampers effective use of their data, requiring new solutions for publication mechanisms. Our solution is based on applying a specific component technology, Digital Content Component (DCC), which is capable of uniformly encapsulating data and software. Sensor data publication is tackled by extending DCCs to comply with geospatial standards for Web services from the OGC (Open Geospatial Consortium). Using this approach, Web services can be implemented by DCCs, with publication of sensor data following standards. Furthermore, this solution allows client applications to request the execution of pre-processing functions before data is published. The approach enables scientists to share, find, process and access geospatial sensor data in a flexible and homogeneous manner.},
author = {Gilberto Zonta Pastorello Jr and Luiz Celso Gomes Jr and Claudia Bauzer Medeiros and André Santanchè},
booktitle = {Proceedings of the 4th International Conference on Web Information Systems and Technologies (WEBIST)},
date = {2008-05-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/pastorellojr_gomesjr_medeiros_santanche_webist2008.pdf},
publisher = {INSTICC},
title = {Sensor Data Publication on the Web for Scientific Applications},
year = {2008}
}
|
Joliveau, Marc;
Vuyst, Florian De;
Jomier, Genevieve;
Medeiros, Claudia Bauzer
Exploitation de données brutes de trafic routier urbain issues d’un réseau de capteurs géoréférencés (conference)
Proc. Atelier Systèmes d'Information en Transport,
2008.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Joliveau2008,
abstract = {Traffic data coming from sensor networks have prompted a wide range of research issues in Transportation Information Systems. These data are usually represented by large and complex spatio-temporal series. This paper presents a new approach to manage raw data coming from static georeferenced sensors. Our work combines analytic methods to process sensor data with an architecture for an information system dedicated to road traffic. It is being conducted within a project that uses real data generated by 1,000 sensors over 3 years in a large French city.},
author = {Marc Joliveau and Florian De Vuyst and Genevieve Jomier and Claudia Bauzer Medeiros},
booktitle = {Proc. Atelier Systèmes d'Information en Transport},
date = {2008-05-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/JoliveauEtAl-CADDY.pdf},
title = {Exploitation de données brutes de trafic routier urbain issues d’un réseau de capteurs géoréférencés},
year = {2008}
}
|
Mariote, Leonardo Elias
Mining sensor data time series (Mineração de séries temporais de dados de sensores) (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
2008.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{Mariote2008,
abstract = {Sensor networks have increased the amount and variety of temporal data available. This motivated new techniques in data mining, which describe different aspects of time series. Related work addresses several issues, such as indexing and clustering time series, and the definition of more efficient feature vectors and distance functions. However, most results focus on describing the values in a series, and not their evolution. Furthermore, the majority of papers only characterize a single series, which is not enough in cases where multiple kinds of data must be considered simultaneously. This dissertation presents a new technique, which describes time series using a distinct approach, characterizing oscillation patterns rather than the values themselves. The new descriptor -- TIDES (Time Series Oscillation Descriptor) -- is based on approximating the series by segments, and then extracting the angular coefficients of the segments. TIDES supports multi-scale analysis, thereby allowing two series to be compared according to distinct granularities, which enables a more thorough analysis. The dissertation also presents several extensions to TIDES, which enable describing multiple series at a time. This joint description is needed to correctly characterize phenomena which evolve jointly -- the so-called co-evolution. Experiments conducted with real data on temperature, for different Brazilian cities, show that TIDES successfully characterizes time series oscillation.},
author = {Leonardo Elias Mariote},
date = {2008-04-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/dissertacao1.pdf},
school = {Instituto de Computação - Unicamp},
title = {Mining sensor data time series (Mineração de séries temporais de dados de sensores)},
year = {2008}
}
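As a rough, self-contained illustration of the segment-slope idea summarized in the abstract (fixed-length segments, least-squares angular coefficients, two granularities), the sketch below uses invented names and parameters; it is not code from the dissertation.

```python
def segment_slopes(series, seg_len):
    """Least-squares slope (angular coefficient) of each non-overlapping
    segment of `series`, taken left to right."""
    slopes = []
    for start in range(0, len(series) - seg_len + 1, seg_len):
        seg = series[start:start + seg_len]
        n = len(seg)
        xs = range(n)
        mean_x = sum(xs) / n
        mean_y = sum(seg) / n
        num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, seg))
        den = sum((x - mean_x) ** 2 for x in xs)
        slopes.append(num / den)
    return slopes

# Multi-scale view: describe the same series at a coarse and a fine granularity.
series = [1, 2, 3, 5, 4, 3, 2, 4, 6, 8, 7, 5]
coarse = segment_slopes(series, 6)   # 2 slope values
fine = segment_slopes(series, 3)     # 4 slope values
```

Comparing two series at a given scale would then amount to comparing their slope sequences, so oscillation patterns rather than raw values drive the comparison.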
|
Daltio, J.;
Medeiros, C. B.;
Gomes Jr, L. C.;
Lewinsohn, T.
A Framework to Process Complex Biodiversity Queries (conference)
Proc. ACM Symposium on Applied Computing (ACM SAC),
2008.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Daltio2008c,
abstract = {Tackling biodiversity information is essentially a distributed effort. Data handled are inherently heterogeneous, being provided by distinct research groups and using different vocabularies. Queries in biodiversity systems require correlating these data, using many kinds of knowledge on geographic, biological and ecological issues. Available biodiversity systems can only cope with part of these queries, and end users must perform several manual tasks to derive the desired correlations, because of semantic mismatches among data sources and the lack of appropriate operators. This paper presents a solution based on Web services to meet these challenges. It relies on ontologies to retrieve query contexts and uses the terms of each context to discover suitable sources in data repositories. This approach is being tested using real data, with new services.},
author = {J. Daltio and C. B. Medeiros and L. C. Gomes Jr and T. Lewinsohn},
booktitle = {Proc. ACM Symposium on Applied Computing (ACM SAC)},
date = {2008-03-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/SWA.pdf},
title = {A Framework to Process Complex Biodiversity Queries},
year = {2008}
}
|
Zenteno, A. T.;
Cuaresma, J. M.;
Gutierrez, J.;
Martins, E.;
Torres, R. da S.;
Baranauskas, M. C.
A Development Process for Web Geographic Information System. (conference)
Proceedings of the 10th International Conference on Enterprise Information Systems, Barcelona,
2008.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Zenteno2008,
abstract = {This paper introduces a process for developing Web GIS (Geographic Information Systems) applications. This process integrates the NDT (Navigational Development Techniques) approach with some of the Organizational Semiotic models. The use of the proposed development process is illustrated for a real application: the construction of the WebMaps system. WebMaps is a Web GIS system whose main goal is to support harvest planning in Brazil.},
author = {A. T. Zenteno and J. M. Cuaresma and J. Gutierrez and E. Martins and R. da S. Torres and M. C. Baranauskas},
booktitle = {Proceedings of the 10th International Conference on Enterprise Information Systems, Barcelona},
date = {2008-01-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/zenteno08iceis.pdf},
title = {A Development Process for Web Geographic Information System.},
year = {2008}
}
|
Zegarra, J. A. M.;
Beek-Pepper, J. C.;
Leite, N. J.;
Torres, R. da S.;
Falcão, A. X.
Combining Global with Local Texture Information for Image Retrieval Applications. (conference)
IEEE International Symposium on Multimedia, 2008, Berkeley, California, USA,
2008.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Zegarra2008,
abstract = {This paper proposes a new texture descriptor to guide the search and retrieval in image databases. It extracts rich information from global and local primitives of textured images. At a higher level, the global macro-features in textured images are characterized by exploiting the multiresolution properties of the Steerable Pyramid Decomposition. By doing this, the global texture configurations are highlighted. At a finer level, the local arrangements of texture micro-patterns are encoded by the Local Binary Pattern operator. Experiments were carried out on the standard Vistex dataset aiming to compare our descriptors against popular texture extraction methods with regard to their retrieval accuracies. The comparative evaluations allowed us to show the superior descriptive properties of our feature representation methods.},
author = {J. A. M. Zegarra and J. C. Beek-Pepper and N. J. Leite and R. da S. Torres and A. X. Falcão},
booktitle = {IEEE International Symposium on Multimedia, 2008, Berkeley, California, USA},
date = {2008-01-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/zegarra08ism.pdf},
title = {Combining Global with Local Texture Information for Image Retrieval Applications.},
year = {2008}
}
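The Local Binary Pattern step mentioned in the abstract can be illustrated with the standard 8-neighbour LBP operator; the function below is a generic textbook version, not necessarily the exact variant used in the paper.

```python
def lbp_code(patch):
    """8-neighbour Local Binary Pattern code for the centre pixel of a
    3x3 patch: each neighbour >= centre contributes one bit."""
    c = patch[1][1]
    # neighbours clockwise from top-left
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for bit, n in enumerate(neighbours):
        if n >= c:
            code |= 1 << bit
    return code
```

A texture descriptor is then typically the histogram of these codes over all 3x3 patches of the image, capturing local micro-pattern arrangements.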
|
Rocha, A.;
Almeida Jr, J. G.;
Nascimento, M.;
Torres, R. da S.;
Goldenstein, S. K.
Efficient and Flexible Cluster-and-Search for CBIR (conference)
2008.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Rocha2008,
abstract = {CBIR is a challenging problem both in terms of effectiveness and efficiency. In this paper, we present a flexible cluster-and-search approach that is able to reuse any previously proposed image descriptor as long as a suitable similarity function is provided. In the clustering step, the image data set is clustered using a hybrid divisive-agglomerative hierarchical clustering technique. The obtained clusters are organized in a tree that can be traversed efficiently using the similarity function associated with the chosen image descriptors. Our experiments have shown that we can improve search-time performance by a factor of 10 or more, at the cost of a small loss, typically less than 15%, in effectiveness when compared to state-of-the-art solutions.},
author = {A. Rocha and J. G. Almeida Jr and M. Nascimento and R. da S. Torres and S. K. Goldenstein},
date = {2008-01-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/rocha08acvis.pdf},
title = {Efficient and Flexible Cluster-and-Search for CBIR},
year = {2008}
}
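A minimal sketch of the cluster-and-search principle (prune the search to the most promising cluster, then scan only inside it): the paper builds a hybrid divisive-agglomerative hierarchy, whereas this toy version uses a single flat level with invented names, purely for illustration.

```python
import math

def nearest(point, candidates, dist=math.dist):
    """Return the candidate minimizing dist(point, candidate)."""
    return min(candidates, key=lambda c: dist(point, c))

def build_index(vectors, centroids):
    """Group each feature vector under its closest centroid (one flat level)."""
    index = {c: [] for c in centroids}
    for v in vectors:
        index[nearest(v, centroids)].append(v)
    return index

def cluster_search(query, centroids, index):
    """Scan only the cluster whose centroid is nearest to the query."""
    return nearest(query, index[nearest(query, centroids)])

# Toy 2-D "descriptors": two clusters; a query near one cluster never
# touches the vectors stored under the other centroid.
centroids = [(0, 0), (10, 10)]
vectors = [(0, 0), (1, 0), (10, 10), (11, 11)]
index = build_index(vectors, centroids)
```

Any descriptor works as long as `dist` is replaced by the similarity function associated with it, which is the reuse property the abstract emphasizes.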
|
Medeiros, Claudia Bauzer;
Breitman, Karin
Report on WIT08 - II Workshop on Women in Information Technology (Technical Report)
2008,
Technical Report,
WIT08 Report,
2008.
(
Abstract |
Links |
BibTeX |
Tags:
Techreport
)
@techreport{Medeiros2008b,
abstract = {WIT is an initiative of the Brazilian Computer Society (SBC) to discuss gender issues in Information Technology (IT) in Brazil – success stories, policies to foster participation, and ways and means to attract and involve the young, especially women, in IT-related careers. Organized around guest speakers and panels, the workshop concentrated on debating problems related with women's access to IT – the job market, digital inclusion and literacy. WIT was organized by Claudia Bauzer Medeiros and Karin Breitman, both CS faculty and part of the board of SBC. This report was submitted to the ACM-W CIS newsletter of August 2008},
author = {Claudia Bauzer Medeiros and Karin Breitman},
date = {2008-01-01},
institution = {2008},
keyword = {Techreport},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/WIT08Report.pdf},
number = {WIT08 Report},
title = {Report on WIT08 - II Workshop on Women in Information Technology},
type = {Technical Report},
year = {2008}
}
|
Almeida Jr, J. G.;
Rocha, A.;
Torres, R. da S.;
Goldenstein, S. K.
Making Colors Worth More than a Thousand Words. (conference)
23rd Annual ACM Symposium on Applied Computing, Fortaleza, Brazil,
2008.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Jr2008,
abstract = {Content-based image retrieval (CBIR) is a challenging task. Common techniques use only low-level features. However, these solutions can lead to the so-called ‘semantic gap’ problem: images with high feature similarities may be different in terms of user perception. In this paper, our objective is to retrieve images based on color cues which may present some affine transformations. For that, we present CSIR: a new method for comparing images based on discrete distributions of distinctive color and scale image regions. We validate the technique using images with a large range of viewpoints, partial occlusion, changes in illumination, and various domains.},
author = {J. G. Almeida Jr and A. Rocha and R. da S. Torres and S. K. Goldenstein},
booktitle = {23rd Annual ACM Symposium on Applied Computing, Fortaleza, Brazil},
date = {2008-01-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/almeida08acmsac.pdf},
title = {Making Colors Worth More than a Thousand Words.},
year = {2008}
}
|
Daltio, Jaudete;
Medeiros, Claudia Bauzer
Aondê: An Ontology Web Service for Interoperability across Biodiversity Applications (article)
Information Systems,
7-8,
2008.
(
Abstract |
Links |
BibTeX |
Tags:
Article
)
@article{Daltio2008,
abstract = {Biodiversity research requires associating data about living beings and their habitats, constructing sophisticated models and correlating all kinds of information. Data handled are inherently heterogeneous, being provided by distinct (and distributed) research groups, which collect these data using different vocabularies, assumptions, methodologies and goals, and under varying spatio-temporal frames. Ontologies are being adopted as one of the means to alleviate these heterogeneity problems, thus helping cooperation among researchers. While ontology toolkits offer a wide range of operations on ontologies, they are self-contained and cannot be accessed by external applications. Thus, the many proposals for adopting ontologies to enhance interoperability in application development are either based on the use of ontology servers or of ontology frameworks. The latter support many functions, but impose application recoding whenever ontologies change, whereas the former support ontology evolution, for a limited set of functions. This paper presents Aondê -- a Web service geared towards the biodiversity domain that combines the advantages of both frameworks and servers, supporting flexible ontology sharing and management on the Web. By clearly separating storage concerns from semantic issues, the service provides independence between ontology evolution and the applications that need them. The service provides a wide range of basic operations for creation, storage, management, analysis and integration of multiple ontologies. These operations can be repeatedly invoked by client applications to construct more complex manipulations. Aondê has been validated for real biodiversity case studies.},
author = {Jaudete Daltio and Claudia Bauzer Medeiros},
date = {2008-01-01},
journal = {Information Systems},
keyword = {Article},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/InformationSystems.pdf},
number = {7-8},
pages = {724-753},
title = {Aondê: An Ontology Web Service for Interoperability across Biodiversity Applications},
volume = {33},
year = {2008}
}
|
Barga, Roger S.;
Digiampietri, Luciano Antonio
Automatic capture and efficient storage of e-Science experiment provenance (article)
Concurrency and Computation: Practice and Experience,
5,
2008.
@article{Barga2008,
abstract = {For the first provenance challenge, we introduce a layered model to represent workflow provenance that allows navigation from an abstract model of the experiment to instance data collected during a specific experiment run. We outline modest extensions to a commercial workflow engine so it will automatically capture provenance at workflow runtime. We also present an approach to store this provenance data in a relational database. Finally, we demonstrate how core provenance queries in the challenge can be expressed in SQL and discuss the merits of our layered representation. Copyright © 2007 John Wiley & Sons, Ltd.},
author = {Roger S. Barga and Luciano Antonio Digiampietri},
date = {2008-01-01},
journal = {Concurrency and Computation: Practice and Experience},
keyword = {Article},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/VERSAO_FINAL_CCPE1235.pdf},
number = {5},
pages = {419-429},
title = {Automatic capture and efficient storage of e-Science experiment provenance},
volume = {20},
year = {2008}
}
Bacarin, E.;
Madeira, E. R. M.;
Medeiros, C. B.
Contract e-Negotiation in Agricultural Supply Chains. (article)
International Journal of Electronic Commerce,
4,
2008.
@article{Bacarin2008,
abstract = {Supply chains are composed of distributed, heterogeneous and autonomous elements, whose relationships are dynamic. Agricultural supply chains, in particular, have a number of distinguishing features - e.g., they are characterized by strict regulations to ensure safety of food products, and by the need for multi-level traceability. Contracts in such chains need sophisticated specification and management of chain agents -- their roles, rights, duties and interaction modes -- to ensure auditability. This paper proposes a framework that attacks these problems, which is centered on three main elements to support and manage agent interactions: Contracts, Coordination Plans (a special kind of business process) and Regulations (the business rules). The main contributions are: i) a contract model suitable for agricultural supply chains; ii) a negotiation protocol able to produce such contracts, which allows a wide range of negotiation styles; iii) negotiation implementation via Web services. As a consequence, we maintain independence between business processes and contract negotiation, thereby fostering interoperability among chain processes.},
author = {E. Bacarin and E. R. M. Madeira and C. B. Medeiros},
date = {2008-01-01},
journal = {International Journal of Electronic Commerce},
keyword = {Article},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/ijec07.pdf},
note = {Author's version},
number = {4},
pages = {71-97},
title = {Contract e-Negotiation in Agricultural Supply Chains.},
volume = {12},
year = {2008}
}
2007 |
Pierre, Mateus Silva
Access Control in Multiversion Geographic Databases (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
2007.
@mastersthesis{Pierre2007,
abstract = {Geographic applications are increasingly influencing our daily activities. Their development requires efforts from multiple teams of experts with different views and authorizations to access data. As a result, several mechanisms have been proposed to control authorization in geographic databases or to provide the use of versions. These mechanisms, however, work in isolation, prioritizing only either data access or versioning systems. This dissertation addresses this issue, by proposing a unified authorization model for databases that faces both problems. The model deals with the access control issue in geographic databases, taking into account the existence of data versioning mechanisms. This model may serve as the basis for cooperative and secure work in applications that use Geographic Information Systems (GIS).},
author = {Mateus Silva Pierre},
date = {2007-12-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/TeseMateusSilvaPierre.pdf},
school = {Instituto de Computação - Unicamp},
title = {Access Control in Multiversion Geographic Databases},
year = {2007}
}
Macário, Carla G. N.;
Medeiros, Claudia B.;
Senra, Rodrigo D. A.
O projeto WebMAPS: desafios e resultados. (conference)
IX Brazilian Symposium on GeoInformatics - Geoinfo 2007,
INPE,
Campos do Jordão - SP,
2007.
@conference{Macario2007,
abstract = {This paper describes challenges and results of the WebMAPS project, a multidisciplinary effort involving agrarian and computer sciences under development at UNICAMP. Its goal is to develop a Web service-based platform for agro-environmental planning. This requires leading-edge research on the specification and implementation of software with access to several kinds of distributed information: satellite images, sensor data, agricultural production data, and geographic data.},
address = {Campos do Jordão - SP},
author = {Carla G. N. Macário and Claudia B. Medeiros and Rodrigo D. A. Senra},
booktitle = {IX Brazilian Symposium on GeoInformatics - Geoinfo 2007},
date = {2007-11-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/webmaps_geoinfo_07.pdf},
pages = {239-250},
publisher = {INPE},
title = {O projeto WebMAPS: desafios e resultados.},
year = {2007}
}
Macário, Carla Geovana do Nascimento;
Senra, Rodrigo Dias Arruda;
Medeiros, Claudia Bauzer;
Torres, Ricardo da Silva;
Lamparelli, Rubens Augusto Camargo;
Zullo Junior, Jurandir;
Rocha, Jansle Vieira;
Madeira, Edmundo Roberto Mauro;
Martins, Eliane;
Baranauskas, Maria Cecília Calani;
Leite, Neucimar Jerônimo
Monitoramento de safras via web: um caso de sucesso em pesquisa multidisciplinar (conference)
6o. Congresso Brasileiro de Agroinformática - SBIAgro 2007,
São Pedro, SP,
2007.
@conference{doMacario2007,
address = {São Pedro, SP},
author = {Carla Geovana do Nascimento Macário and Rodrigo Dias Arruda Senra and Claudia Bauzer Medeiros and Ricardo da Silva Torres and Rubens Augusto Camargo Lamparelli and Jurandir Zullo Junior and Jansle Vieira Rocha and Edmundo Roberto Mauro Madeira and Eliane Martins and Maria Cecília Calani Baranauskas and Neucimar Jerônimo Leite},
booktitle = {6o. Congresso Brasileiro de Agroinformática - SBIAgro 2007},
date = {2007-10-01},
keyword = {Conference},
pages = {326-330},
title = {Monitoramento de safras via web: um caso de sucesso em pesquisa multidisciplinar},
year = {2007}
}
Zegarra, J. A. M.;
Leite, N. J.;
Torres, R. da S.
Rotation-Invariant and Scale-Invariant Steerable Pyramid Decomposition for Texture Image Retrieval (conference)
XX Brazilian Symposium on Computer Graphics and Image Processing,
2007.
@conference{Zegarra2007b,
author = {J. A. M. Zegarra and N. J. Leite and R. da S. Torres},
booktitle = {XX Brazilian Symposium on Computer Graphics and Image Processing},
date = {2007-10-01},
keyword = {Conference},
title = {Rotation-Invariant and Scale-Invariant Steerable Pyramid Decomposition for Texture Image Retrieval},
year = {2007}
}
Pedronetti, D. C. G.;
Torres, R. da S.
Uma plataforma de Serviços de Recomendação para Bibliotecas Digitais (conference)
VI Workshop de Teses e Dissertações em Bancos de Dados, XXII Simpósio Brasileiro de Banco de Dados,
João Pessoa,
2007.
@conference{Pedronetti2007,
address = {João Pessoa},
author = {D. C. G. Pedronetti and R. da S. Torres},
booktitle = {VI Workshop de Teses e Dissertações em Bancos de Dados, XXII Simpósio Brasileiro de Banco de Dados},
date = {2007-10-01},
keyword = {Conference},
title = {Uma plataforma de Serviços de Recomendação para Bibliotecas Digitais},
year = {2007}
}
Nakai, Alan Massaru;
Madeira, Edmundo
An Infrastructure to Support Choreographies in Interorganizational Business Processes (conference)
Proceedings of the I Workshop on Business Process Management (WBPM 2007),
2007.
@conference{Nakai2007b,
abstract = {The main attractiveness of Web services is their capacity to provide interoperability among heterogeneous distributed systems. Increasingly, companies and organizations have adopted Web services as a way to interoperate with their business partners. In such a scenario, Web services choreography can be applied in the specification of interorganizational business processes. However, the dynamic nature of business partnerships requires mechanisms for agile designing and deploying of choreographies. In this paper, we present an infrastructure that aims to address the above concern. Our approach, which is based on WS-CDL, BPEL, and UDDI standards, aims to reach flexibility by providing mechanisms for sharing, finding and executing choreographies in a friendly manner for the user. We also present a prototype implementation and its application in a supply chain integration system.},
author = {Alan Massaru Nakai and Edmundo Madeira},
booktitle = {Proceedings of the I Workshop on Business Process Management (WBPM 2007)},
date = {2007-10-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/nakai_wbpm07.pdf},
title = {An Infrastructure to Support Choreographies in Interorganizational Business Processes},
year = {2007}
}
Pastorello Jr, Gilberto Zonta;
Medeiros, Claudia Bauzer;
Santanchè, André
Applying Scientific Workflows to Manage Sensor Data (conference)
Proc. 1st e-Science Workshop -- XXII Brazilian Symposium on Databases,
2007.
@conference{Jr2007b,
abstract = {There is a world wide effort to create infrastructures that support multidisciplinary, collaborative and distributed work in scientific research, giving birth to the so-called e-Science environments. At the same time, the proliferation, variety and ubiquity of sensing devices, from satellites to tiny sensors are making huge amounts of data available to scientists. This paper presents a framework with a twofold solution: (i) using a specific kind of component -- DCC -- for homogeneous sensor data acquisition; and (ii) using scientific workflows for flexible composition of sensor data and manipulation software. We present a solution for publishing sensor data tailored to distributed scientific applications.},
author = {Gilberto Zonta Pastorello Jr and Claudia Bauzer Medeiros and André Santanchè},
booktitle = {Proc. 1st e-Science Workshop -- XXII Brazilian Symposium on Databases},
date = {2007-10-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/pastorellojr-medeiros-santenche_eScience.pdf},
title = {Applying Scientific Workflows to Manage Sensor Data},
year = {2007}
}
Andaló, F. A.;
Miranda, P.;
Torres, R. da S.;
Falcão, A. X.
A New Shape Descriptor based on Tensor Scale (conference)
8th International Symposium on Mathematical Morphology,
Rio de Janeiro, Brazil,
2007.
@conference{Andalo2007b,
abstract = {Tensor scale is a morphometric parameter that unifies the representation of local structure thickness, orientation, and anisotropy, which can be used in several computer vision and image processing tasks. In this paper, we exploit this concept for binary images and propose a shape descriptor that encodes region and contour properties in a very efficient way. Experimental results are provided, showing the effectiveness of the proposed descriptor, when compared to other relevant shape descriptors, with regard to their use in content-based image retrieval systems.},
address = {Rio de Janeiro, Brazil},
author = {F. A. Andaló and P. Miranda and R. da S. Torres and A. X. Falcão},
booktitle = {8th International Symposium on Mathematical Morphology},
date = {2007-10-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/ismm2007.pdf},
title = {A New Shape Descriptor based on Tensor Scale},
year = {2007}
}
Rocha, A.;
Almeida, J. G.;
Torres, R. da S.;
Goldenstein, S. K.
A New Hybrid Clustering Approach for Image Retrieval (Technical Report)
Institute of Computing, State University of Campinas,
Technical Report,
IC-07-29,
2007.
@techreport{Rocha2007,
abstract = {In this paper, we present a new Hybrid Hierarchical Clustering approach for Image Retrieval. Our method combines features from both divisive and agglomerative clustering paradigms in order to yield good-quality clustering solutions with reduced computational cost. We provide several experiments showing that our technique reduces the number of required comparisons to perform a retrieval without significant loss in effectiveness when compared to flat-based solutions.},
author = {A. Rocha and J. G. Almeida and R. da S. Torres and S. K. Goldenstein},
date = {2007-09-01},
institution = {Institute of Computing, State University of Campinas},
keyword = {Techreport},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/rocha07tr.pdf},
number = {IC-07-29},
title = {A New Hybrid Clustering Approach for Image Retrieval},
type = {Technical Report},
year = {2007}
}
Andaló, F. A.;
Miranda, P.;
Torres, R. da S.;
Falcão, A. X.
Detecting Contour Saliences Using Tensor Scale (conference)
IEEE International Conference on Image Processing,
San Antonio, Texas, USA,
2007.
@conference{Andalo2007c,
address = {San Antonio, Texas, USA},
author = {F. A. Andaló and P. Miranda and R. da S. Torres and A. X. Falcão},
booktitle = {IEEE International Conference on Image Processing},
date = {2007-09-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/icip2007.pdf},
title = {Detecting Contour Saliences Using Tensor Scale},
year = {2007}
}
Almeida, J. G.;
Rocha, A.;
Torres, R. da S.;
Goldenstein, S. K.
Image Retrieval based on Color and Scale Representative Image Regions (CSIR) (Technical Report)
Institute of Computing, State University of Campinas,
Technical Report,
IC-07-28,
2007.
@techreport{Almeida2007,
abstract = {Content-based image retrieval (CBIR) is a challenging task. Common techniques use only low-level features. However, these solutions can lead to the so-called ‘semantic gap’ problem: images with high feature similarities may be different in terms of user perception. In this paper, our objective is to retrieve images based on color cues which may present some affine transformations. For that, we present CSIR: a new method for comparing images based on discrete distributions of distinctive color and scale image regions. We validate the technique using images with a large range of viewpoints, partial occlusion, changes in illumination, and various domains.},
author = {J. G. Almeida and A. Rocha and R. da S. Torres and S. K. Goldenstein},
date = {2007-09-01},
institution = {Institute of Computing, State University of Campinas},
keyword = {Techreport},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/almeida07tr.pdf},
number = {IC-07-28},
title = {Image Retrieval based on Color and Scale Representative Image Regions (CSIR)},
type = {Technical Report},
year = {2007}
}
Digiampietri, Luciano A.;
Medeiros, Claudia B.;
Setubal, João C.;
Barga, Roger S.
Traceability Mechanisms for Bioinformatics Scientific Workflows (conference)
Proceedings of the AAAI2007's Workshop on Semantic E-Science (SeS2007),
Vancouver, Canada,
2007.
@conference{Digiampietri2007b,
abstract = {Traceability and Provenance are often used interchangeably in eScience, being associated with the need scientists have to document their experiments, and so allow experiments to be checked and reproduced by others. These terms have, however, different meanings: provenance is more often associated with data origins, whereas traceability concerns the interlinking and execution of processes. This paper proposes a set of mechanisms to deal with this last aspect; the solution is based on database research combined with scientific workflows, plus domain-specific knowledge stored in ontology structures. This meets a need from bioinformatics laboratories, where the majority of computer systems do not support traceability facilities. These mechanisms have been implemented in a prototype, and an example using the genome assembly problem is given.},
address = {Vancouver, Canada},
author = {Luciano A. Digiampietri and Claudia B. Medeiros and João C. Setubal and Roger S. Barga},
booktitle = {Proceedings of the AAAI2007's Workshop on Semantic E-Science (SeS2007)},
date = {2007-08-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/WS12DigiampietriL.pdf},
pages = {26-33},
title = {Traceability Mechanisms for Bioinformatics Scientific Workflows},
year = {2007}
}
Digiampietri, Luciano Antonio
Management of Bioinformatics Scientific Workflows (partially in Portuguese) (phdthesis)
Instituto de Computação - Unicamp,
phdthesis,
2007.
@phdthesis{Digiampietri2007,
abstract = {Bioinformatics activities are growing all over the world, following a proliferation of data and tools. This brings new challenges, such as how to understand and organize these resources, how to exchange and reuse successful experimental procedures (tools and data), and how to provide interoperability among data and tools across different sites, used by users with distinct profiles. This thesis proposes a computational infrastructure to solve these problems. The infrastructure allows users to design, reuse, annotate, validate, share and document bioinformatics experiments. Scientific workflows are the mechanisms used to represent these experiments. Combining research on databases, scientific workflows, artificial intelligence and the semantic Web, the infrastructure takes advantage of ontologies to support the specification and annotation of bioinformatics workflows and to serve as a basis for traceability mechanisms. Moreover, it uses artificial intelligence planning techniques to support automatic, iterative and supervised composition of tasks to satisfy the needs of the different kinds of user. The data integration and interoperability aspects are solved by combining the use of ontologies, structure mapping and interface matching algorithms. The infrastructure was implemented in a prototype and validated on real bioinformatics data.},
author = {Luciano Antonio Digiampietri},
date = {2007-08-01},
keyword = {PhDThesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/digiampietri_tese.pdf},
school = {Instituto de Computação - Unicamp},
title = {Management of Bioinformatics Scientific Workflows (partially in Portuguese)},
year = {2007}
}
Daltio, Jaudete
Aondê: Um Serviço Web de Ontologias para Interoperabilidade em Sistemas de Biodiversidade (Aondê: An Ontology Web Service for Interoperability across Biodiversity Information Systems) (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
2007.
@mastersthesis{Daltio2007,
abstract = {Biodiversity research requires associating data about living beings and their habitats, constructing sophisticated models and correlating all kinds of information. Data handled are inherently heterogeneous, being provided by distinct (and distributed) research groups, which collect these data using different vocabularies, assumptions, methodologies and goals, and under varying spatio-temporal frames. This poses many kinds of challenges in Computer Science research, from the physical (e.g., diversity of storage structures) to the conceptual level (e.g., diversity of perspectives and of knowledge domains). The adoption of ontologies has been proposed as a means to help solve heterogeneity issues. However, this kind of solution gives birth to new research issues, since it implies handling problems in ontology design, management and sharing. This dissertation presents a new kind of Web Service whose goal is to help in solving such issues. Aondê (which means "owl" in Tupi, the main branch of native Brazilian languages) is a Web Service that provides a wide range of operations for storage, management, search, ranking, analysis and integration of ontologies. The text covers the specification and implementation of Aondê, which have been validated by a prototype tested with large ontologies and real biodiversity case studies.},
author = {Jaudete Daltio},
date = {2007-08-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/DissertacaoJaudeteDaltio.pdf},
school = {Instituto de Computação - Unicamp},
title = {Aondê: Um Serviço Web de Ontologias para Interoperabilidade em Sistemas de Biodiversidade (Aondê: An Ontology Web Service for Interoperability across Biodiversity Information Systems)},
year = {2007}
}
Penatti, O. B.;
Torres, R. da S.
Descritor de Relacionamento Espacial baseado em Partições (conference)
XXVI Concurso de Trabalhos de Iniciação Científica, XXVII Congresso da Sociedade Brasileira de Computação,
Rio de Janeiro, Brazil,
2007.
@conference{Penatti2007,
address = {Rio de Janeiro, Brazil},
author = {O. B. Penatti and R. da S. Torres},
booktitle = {XXVI Concurso de Trabalhos de Iniciação Científica, XXVII Congresso da Sociedade Brasileira de Computação},
date = {2007-07-01},
keyword = {Conference},
title = {Descritor de Relacionamento Espacial baseado em Partições},
year = {2007}
}
Ferreira, Cristiano Dalmaschio
Image Retrieval with Relevance Feedback based on Genetic Programming (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
2007.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{Ferreira2007,
abstract = {Relevance feedback has been used to incorporate the subjectivity of user visual perception into content-based image retrieval tasks. The relevance feedback process consists of the following steps: (i) showing a small set of images; (ii) indication of relevant or irrelevant images by the user; and (iii) learning the user's needs from her feedback and selecting a new set of images to be shown. This procedure is repeated until the user is satisfied. This dissertation presents two content-based image retrieval frameworks with relevance feedback. These frameworks employ genetic programming to discover a combination of descriptors that characterizes the user's perception of image similarity. The use of genetic programming is motivated by its capability of exploring the search space, which matches the major goal of the proposed frameworks: to find, among all combination functions of descriptors, the one that best represents the user's needs. Several experiments were conducted to validate the proposed frameworks. These experiments employed three different image databases, with color, shape and texture descriptors used to represent the content of the database images. The proposed frameworks were compared with three other content-based image retrieval methods regarding their efficiency and effectiveness in the retrieval process. Experimental results demonstrate the superiority of the proposed methods. The contributions of this work are: (i) a study of different relevance feedback techniques; (ii) the proposal of two content-based image retrieval frameworks with relevance feedback, based on genetic programming; and (iii) the implementation of the proposed methods and their validation with several experiments, including comparison with other methods.},
author = {Cristiano Dalmaschio Ferreira},
date = {2007-07-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/Dissertacao.pdf},
school = {Instituto de Computação - Unicamp},
title = {Image Retrieval with Relevance Feedback based on Genetic Programming},
year = {2007}
}
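The feedback loop described in the abstract above (steps i–iii, repeated until the user is satisfied) can be sketched as follows. This is a minimal illustrative sketch, not the thesis's actual GP system: it fixes a linear combination of descriptor distances and re-fits the weights from the user's relevant/irrelevant marks, whereas the thesis evolves the combination function itself with genetic programming. All function names, the feature representation, and the weight-update rule are hypothetical.

```python
# Illustrative relevance-feedback loop (hypothetical names and update rule):
# images are feature tuples, one value per descriptor (e.g. color, shape,
# texture); similarity is a weighted combination of per-descriptor distances.

def combined_distance(query, image, weights):
    # Linear combination of per-descriptor distances; a GP approach would
    # evolve the combination function itself instead of fixing this form.
    return sum(w * abs(q - v) for w, q, v in zip(weights, query, image))

def feedback_round(query, database, weights, top_k=5):
    # Step (i): rank the database and show the top-k images to the user.
    ranked = sorted(database, key=lambda img: combined_distance(query, img, weights))
    return ranked[:top_k]

def refit_weights(weights, relevant, irrelevant, query, lr=0.1):
    # Step (iii): after the user marks images (step ii), nudge each
    # descriptor weight down where relevant images are far from the query
    # and up where irrelevant images are close -- a crude learning surrogate.
    new = list(weights)
    for i in range(len(weights)):
        rel_gap = sum(abs(query[i] - img[i]) for img in relevant) / max(len(relevant), 1)
        irr_gap = sum(abs(query[i] - img[i]) for img in irrelevant) / max(len(irrelevant), 1)
        new[i] = max(0.0, new[i] + lr * (irr_gap - rel_gap))
    return new
```

In use, `feedback_round` and `refit_weights` would alternate until the user stops marking images, with each round's ranking reflecting the updated weights.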
|
Digiampietri, Luciano A.;
Pérez-Alcázar, José J.;
Medeiros, Claudia B.
AI Planning in Web Services Composition: a review of current approaches and a new solution (conference)
Proc. VI Encontro Nacional de Inteligencia Artificial (ENIA),
Rio de Janeiro, Brazil,
2007.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Digiampietri2007b,
abstract = {Web services represent a relevant technology for interoperability. An important step toward the development of applications based on Web services is the ability of selecting and integrating heterogeneous services from different sites. When there is no single service capable of performing a given task, there must be some way to adequately compose basic services to execute this task. The manual composition of Web services is complex and susceptible to errors because of the dynamic behavior and flexibility of the Web. This paper describes and compares AI planning solutions to Web service automatic composition. As a result of this comparison, it proposes an architecture that supports service composition, and which combines AI planning with workflow mechanisms.},
address = {Rio de Janeiro, Brazil},
author = {Luciano A. Digiampietri and José J. Pérez-Alcázar and Claudia B. Medeiros},
booktitle = {Proc. VI Encontro Nacional de Inteligencia Artificial (ENIA)},
date = {2007-07-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/ArtigoENIA.pdf},
note = {ISBN 978-85-7669-116-7},
pages = {983-992},
title = {AI Planning in Web Services Composition: a review of current approaches and a new solution},
year = {2007}
}
|
Daltio, Jaudete;
Medeiros, Claudia Bauzer
Um Serviço de Ontologias para Sistemas de Biodiversidade (An Ontology Service for Biodiversity Information Systems) (conference)
XXXIV SEMISH: Brazilian National CS Conference,
Rio de Janeiro, Brazil,
2007.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Daltio2007b,
abstract = {Biodiversity research requires associating data about living beings and their habitats, integrating information ranging from geographical features to domain specifications, often through ontologies. In this context arise the so-called Biodiversity Information Systems, new management solutions that allow researchers to analyze species' characteristics and their interactions. The goal of this project is to specify and develop an ontology Web service that can be used by different biodiversity systems. The main contributions of this work are: the specification of the requirements of an ontology service; and the specification and implementation of an ontology server. This research is directly connected with the first challenge (management of large multimedia data volumes), and provides support to research in challenge 2 (computational modeling in complex systems).},
address = {Rio de Janeiro, Brazil},
author = {Jaudete Daltio and Claudia Bauzer Medeiros},
booktitle = {XXXIV SEMISH: Brazilian National CS Conference},
date = {2007-07-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/SEMISH.pdf},
title = {Um Serviço de Ontologias para Sistemas de Biodiversidade (An Ontology Service for Biodiversity Information Systems)},
year = {2007}
}
|
Murthy, U.;
Gourdon, D.;
Torres, R. da S.;
Goncalves, M. A.;
Fox, E. A.;
Delcambre, L.
Extending the 5S Digital Library (DL) Framework: From a Minimal DL towards a DL Reference Model (conference)
1st Workshop on Digital Library Foundations, ACM IEEE Joint Conference on Digital Libraries,
2007.
(
BibTeX |
Tags:
Conference
)
@conference{Murthy2007,
author = {U. Murthy and D. Gourdon and R. da S. Torres and M. A. Goncalves and E. A. Fox and L. Delcambre},
booktitle = {1st Workshop on Digital Library Foundations, ACM IEEE Joint Conference on Digital Libraries},
date = {2007-06-01},
keyword = {Conference},
title = {Extending the 5S Digital Library (DL) Framework: From a Minimal DL towards a DL Reference Model},
year = {2007}
}
|
Pastorello Jr, Gilberto Zonta;
Medeiros, Claudia Bauzer;
Santanchè, André
Providing homogeneous access for sensor data management (Technical Report)
Institute of Computing, State University of Campinas,
Technical Report,
IC-07-012,
2007.
(
Abstract |
Links |
BibTeX |
Tags:
Techreport
)
@techreport{Jr2007b,
abstract = {We are facing the proliferation of several kinds of sensing devices, from satellites to tiny sensors. This has opened up new possibilities for us to understand, manage and monitor a given environment, from the small -- e.g., a room -- to the large -- e.g., the planet. This, however, has added a new dimension to the classic problem of heterogeneous data management -- how to handle increasing volumes of sensing data from a wide range of sensors. This report is concerned with the problem of sensor data publication. Our solution involves the design and implementation of a framework for sensor data management, which applies technologies based on Semantic Web standards, components and scientific workflows. Individual sensors or networks are encapsulated into a specific kind of component -- the DCC -- which supports homogeneous access to data and software. DCCs are themselves handled by scientific workflows that provide facilities for controlling data production, integration and publication. As a result, applications that require sensor data interact with workflows instead, being freed from concerns such as sensor particularities or providing separate handlers for real-time streams. The report also presents initial implementation results.},
author = {Gilberto Zonta Pastorello Jr and Claudia Bauzer Medeiros and André Santanchè},
date = {2007-05-01},
institution = {Institute of Computing, State University of Campinas},
keyword = {Techreport},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/IC-TR-07-12_pastorellojr.ps},
number = {IC-07-012},
title = {Providing homogeneous access for sensor data management},
type = {Technical Report},
year = {2007}
}
|
Gomes Jr, Luiz Celso
An architecture for querying biodiversity repositories on the Web (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
2007.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{Jr2007,
abstract = {Life on Earth forms a broad and complex network of interactions, which some experts estimate to be composed of up to 80 million different species. Tackling biodiversity is essentially a distributed effort. A research institution, no matter how big, can only deal with a small fraction of this variety. Therefore, to carry out ecologically-relevant biodiversity research, one must collect chunks of information on species and their habitats from a large number of institutions and correlate them using geographic, biological and ecological knowledge. The distribution and heterogeneity inherent to biodiversity data pose several challenges, such as how to find relevant information on the Web, how to solve syntactic and semantic heterogeneity, and how to process a variety of ecological and spatial predicates. This dissertation presents an architecture that exploits advances in data interoperability and Semantic Web technologies to meet these challenges. The solution relies on ontologies and annotated repositories to support data sharing, discovery and collaborative biodiversity research. A prototype using real data has implemented part of the architecture.},
author = {Luiz Celso Gomes Jr},
date = {2007-05-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/dissertacao.pdf},
school = {Instituto de Computação - Unicamp},
title = {An architecture for querying biodiversity repositories on the Web},
year = {2007}
}
|
Kondo, Andréia Akemi
Management of traceability in food supply chains (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
2007.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{Kondo2007,
abstract = {A supply chain is a set of activities developed from raw materials to final consumers. Supply chains present many research challenges in Computing, such as the modeling of their processes, communication problems between their components, logistics, and process and product management. An issue of increasing importance is enabling traceability to ensure the origin and quality control of products. However, little has been published on implementation aspects of this problem. Most papers address specific aspects and do not strive for generic solutions. This work contributes to filling this gap, considering product, process and service traceability within a supply chain. The main contributions are a model for traceability data storage, supported by a Web Services-based architecture. This work was validated by a prototype, whose tests show the genericity of the solution.},
author = {Andréia Akemi Kondo},
date = {2007-04-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/dissertacao-2.pdf},
school = {Instituto de Computação - Unicamp},
title = {Management of traceability in food supply chains},
year = {2007}
}
|
Santanchè, André;
Medeiros, Claudia Bauzer;
Pastorello Jr, Gilberto Zonta
User-author centered multimedia building blocks (article)
Multimedia Systems Journal,
4-5,
2007.
(
Abstract |
Links |
BibTeX |
Tags:
Article
)
@article{Santanche2007,
abstract = {The advances of multimedia models and tools popularized the access and production of multimedia contents: in this new scenario, there is no longer a clear distinction between authors and end-users of a production. These user-authors often work in a collaborative way. As end-users, they collectively participate in interactive environments, consuming multimedia artifacts. In their authors' role, instead of starting from scratch, they often reuse others' productions, which can be decomposed, fusioned and transformed to meet their goals. Since the need for sharing and adapting productions is felt by many communities, there has been a proliferation of standards and mechanisms to exchange complex digital objects, for distinct application domains. However, these initiatives have created another level of complexity, since people have to define which share/reuse solution they want to adopt, and may even have to resort to programming tasks. They also lack effective strategies to combine these reused artifacts. This paper presents a solution to this demand, based on a user-author centered multimedia building block model -- the digital content component (DCC). DCCs upgrade the notion of digital objects to digital components, as they homogeneously wrap any kind of digital content (e.g., multimedia artifacts, software) inside a single component abstraction. The model is fully supported by a software infrastructure, which exploits the model's semantic power to automate low-level technical activities, thereby freeing user-authors to concentrate on creative tasks. Model and infrastructure improve recent research initiatives to standardize the means of sharing and reusing domain-specific digital contents. The paper's contributions are illustrated using examples implemented in a DCC-based authoring tool, in real-life situations.},
author = {André Santanchè and Claudia Bauzer Medeiros and Gilberto Zonta Pastorello Jr},
date = {2007-03-01},
journal = {Multimedia Systems Journal},
keyword = {Article},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/santanche07multsyst.pdf},
note = {DOI 10.1007/s00530-006-0050-0},
number = {4-5},
pages = {403-421},
title = {User-author centered multimedia building blocks},
volume = {12},
year = {2007}
}
|
Nakai, Alan Massaru
An Infrastructure based on Web Service Choreography for Activity Coordination in Supply Chains (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
2007.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{Nakai2007,
abstract = {A supply chain is the set of activities involved in the creation, transformation and distribution of a product, from raw material to the consumer. A supply chain's participants can work in an integrated way to optimize their performance and increase their commercial competitiveness. From the technological point of view, the distributed, autonomous and heterogeneous nature of supply chain participants raises difficulties when we consider the automation of interorganizational processes. This work proposes an infrastructure based on Web service choreographies for the coordination of the activities that compose the interorganizational business processes of supply chains. This infrastructure implements a coordination model that aims to ease the design and deployment of interorganizational business processes. In this model, processes are represented by WS-CDL choreographies, which are mapped to executable BPEL coordination plans. The work also presents a prototype of the infrastructure to validate it.},
author = {Alan Massaru Nakai},
date = {2007-03-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/dissertacao-Nakai07.pdf},
school = {Instituto de Computação - Unicamp},
title = {An Infrastructure based on Web Service Choreography for Activity Coordination in Supply Chains},
year = {2007}
}
|
Kondo, Andréia Akemi;
Medeiros, Claudia Bauzer;
Bacarin, Evandro;
Madeira, Edmundo Roberto Mauro
Traceability in Food for Supply Chains. (conference)
Proc. 3rd International Conference on Web Information Systems and Technologies (WEBIST),
INSTICC,
Barcelona, Spain,
2007.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Kondo2007b,
abstract = {Supply chains present many research challenges in Computing, such as the modeling of their processes, communication problems between their components, logistics and process management. This paper presents a supply chain traceability model that relies on a Web service-based architecture to ensure interoperability. Geared towards assisting quality control in the agricultural domain, the model allows tracing of products, processes and services inside the chain. The model has been validated on real-life case studies and the Web service implementation is under way.},
address = {Barcelona, Spain},
author = {Andréia Akemi Kondo and Claudia Bauzer Medeiros and Evandro Bacarin and Edmundo Roberto Mauro Madeira},
booktitle = {Proc. 3rd International Conference on Web Information Systems and Technologies (WEBIST)},
date = {2007-03-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/webist07.pdf},
pages = {121-127},
publisher = {INSTICC},
title = {Traceability in Food for Supply Chains.},
year = {2007}
}
|
Andaló, Fernanda Alcântara
Descritores de Forma baseados em Tensor Scale (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
2007.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{Andalo2007,
abstract = {In the past few years, the number of available image collections has increased. In this scenario, there is a demand for information systems for storing, indexing, and retrieving these images. One of the main adopted solutions is to use content-based image retrieval (CBIR) systems, which have the ability to, for a given query image, return the most similar images stored in the database. To answer this kind of query, it is important to have an automated process for content characterization and, for this purpose, CBIR systems use image descriptors based on the color, texture and shape of the objects within the images. In this work, we propose shape descriptors based on Tensor Scale. Tensor Scale is a morphometric parameter that unifies the representation of local structure thickness, orientation, and anisotropy, and it can be used in several computer vision and image processing tasks. Besides the shape descriptors based on this morphometric parameter, we present a study of algorithms for Tensor Scale computation. The main contributions of this work are: (i) a study of image descriptors based on color, texture and shape; (ii) a study of algorithms for Tensor Scale computation; (iii) the proposal and implementation of a contour salience detector based on Tensor Scale; (iv) the proposal and implementation of new shape descriptors based on Tensor Scale; and (v) the validation of the proposed descriptors with regard to their use in content-based image retrieval systems, comparing them experimentally to other relevant, recently proposed shape descriptors.},
author = {Fernanda Alcântara Andaló},
date = {2007-03-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/dissertacao-1.pdf},
school = {Instituto de Computação - Unicamp},
title = {Descritores de Forma baseados em Tensor Scale},
year = {2007}
}
|
Santanchè, André;
Medeiros, Claudia Bauzer
A Component Model and an Infrastructure for the Fluid Web. (article)
IEEE Transactions on Knowledge and Data Engineering,
2,
2007.
(
Abstract |
BibTeX |
Tags:
Article
)
@article{Santanche2007b,
abstract = {The Web is evolving from a space for publication/consumption of documents to an environment for collaborative work, where digital content can travel and be replicated, adapted, decomposed, fusioned, and transformed. We call this the Fluid Web perspective. This view requires a thorough revision of the typical document-oriented approach that permeates content management on the Web. This paper presents our solution for the Fluid Web, which allows moving from the document-oriented to a content-oriented perspective, where "content" can be any digital object. The solution is based on two axes: a self-descriptive unit to encapsulate any kind of content artifact -- the Digital Content Component (DCC) -- and a Fluid Web infrastructure that provides management and deployment of DCCs through the Web, and whose goal is to support collaboration on the Web. Designed to be reused and adapted, DCCs encapsulate data and software using a single structure, thus allowing homogeneous composition and processing of any digital content, executable or not. These properties are exploited by our Fluid Web infrastructure, which supports DCC multilevel annotation and discovery mechanisms, configuration management, and version control. Our work extensively explores taxonomic ontologies and Semantic Web standards, which serve as a semantic bridge, unifying DCC management vocabularies and improving DCC description/indexing/discovery. DCCs and the infrastructure have been implemented and are illustrated by means of a running example for a scientific application.},
author = {André Santanchè and Claudia Bauzer Medeiros},
date = {2007-02-01},
journal = {IEEE Transactions on Knowledge and Data Engineering},
keyword = {Article},
number = {2},
pages = {324-341},
title = {A Component Model and an Infrastructure for the Fluid Web.},
volume = {19},
year = {2007}
}
The Web is evolving from a space for publication/consumption of documents to an environment for collaborative work, where digital content can travel and be replicated, adapted, decomposed, fused, and transformed. We call this the Fluid Web perspective. This view requires a thorough revision of the typical document-oriented approach that permeates content management on the Web. This paper presents our solution for the Fluid Web, which allows moving from the document-oriented to a content-oriented perspective, where "content" can be any digital object. The solution is based on two axes: a self-descriptive unit to encapsulate any kind of content artifact, the Digital Content Component (DCC), and a Fluid Web infrastructure that provides management and deployment of DCCs through the Web, and whose goal is to support collaboration on the Web. Designed to be reused and adapted, DCCs encapsulate data and software using a single structure, thus allowing homogeneous composition and processing of any digital content, executable or not. These properties are exploited by our Fluid Web infrastructure, which supports DCC multilevel annotation and discovery mechanisms, configuration management, and version control. Our work extensively explores taxonomic ontologies and Semantic Web standards, which serve as a semantic bridge, unifying DCC management vocabularies and improving DCC description/indexing/discovery. DCCs and the infrastructure have been implemented and are illustrated by means of a running example for a scientific application.
|
Ferreira, Cristiano D.;
Torres, Ricardo da S.
Image retrieval with relevance feedback based on genetic programming (Technical Report)
Institute of Computing, State University of Campinas,
Technical Report,
IC-07-05,
2007.
(
Abstract |
Links |
BibTeX |
Tags:
Techreport
)
@techreport{Ferreira2007b,
abstract = {In recent years, large digital image collections have been generated, manipulated, and stored in databases. In this scenario, it is very important to develop mechanisms that provide automatic means to retrieve images in an efficient and effective way. However, the subjectivity of the user's perception of an image usually hampers a fully automatic search and retrieval. Relevance Feedback is one of the most common approaches to overcome this difficulty. In this paper, a new content-based image retrieval framework with relevance feedback is proposed. This framework uses Genetic Programming (GP) to learn the user's needs. The objective of this learning method is to find a function that combines different similarity values, from distinct descriptors, and best encodes the user's perception of image similarity. Several experiments are performed to validate the proposed method, comparing our work with other relevance feedback techniques. The experimental results show that the proposed method outperforms all of them.},
author = {Cristiano D. Ferreira and Ricardo da S. Torres},
date = {2007-02-01},
institution = {Institute of Computing, State University of Campinas},
keyword = {Techreport},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/07-05.pdf},
number = {IC-07-05},
title = {Image retrieval with relevance feedback based on genetic programming},
type = {Technical Report},
year = {2007}
}
In recent years, large digital image collections have been generated, manipulated, and stored in databases. In this scenario, it is very important to develop mechanisms that provide automatic means to retrieve images in an efficient and effective way. However, the subjectivity of the user's perception of an image usually hampers a fully automatic search and retrieval. Relevance Feedback is one of the most common approaches to overcome this difficulty. In this paper, a new content-based image retrieval framework with relevance feedback is proposed. This framework uses Genetic Programming (GP) to learn the user's needs. The objective of this learning method is to find a function that combines different similarity values, from distinct descriptors, and best encodes the user's perception of image similarity. Several experiments are performed to validate the proposed method, comparing our work with other relevance feedback techniques. The experimental results show that the proposed method outperforms all of them.
|
Torres, R. da S.;
Falcão, A. X.
Contour Salience Descriptors for Effective Image Retrieval and Analysis (article)
Image and Vision Computing,
1,
2007.
(
Abstract |
Links |
BibTeX |
Tags:
Article
)
@article{daTorres2007,
abstract = {This work exploits the resemblance between content-based image retrieval and image analysis with respect to the design of image descriptors and their effectiveness. In this context, two shape descriptors are proposed: contour saliences and segment saliences. Contour saliences revisits its original definition, where the location of concave points was a problem, and provides a robust approach to incorporate concave saliences. Segment saliences introduces salience values for contour segments, making it possible to use an optimal matching algorithm as distance function. The proposed descriptors are compared with convex contour saliences, curvature scale space, and beam angle statistics using a fish database with 11,000 images organized in 1100 distinct classes. The results indicate segment saliences as the most effective descriptor for this particular application and confirm the improvement of the contour salience descriptor in comparison with convex contour saliences.},
author = {R. da S. Torres and A. X. Falcão},
date = {2007-01-01},
journal = {Image and Vision Computing},
keyword = {Article},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/torres07ivc.pdf},
number = {1},
pages = {3-13},
title = {Contour Salience Descriptors for Effective Image Retrieval and Analysis},
volume = {25},
year = {2007}
}
This work exploits the resemblance between content-based image retrieval and image analysis with respect to the design of image descriptors and their effectiveness. In this context, two shape descriptors are proposed: contour saliences and segment saliences. Contour saliences revisits its original definition, where the location of concave points was a problem, and provides a robust approach to incorporate concave saliences. Segment saliences introduces salience values for contour segments, making it possible to use an optimal matching algorithm as distance function. The proposed descriptors are compared with convex contour saliences, curvature scale space, and beam angle statistics using a fish database with 11,000 images organized in 1100 distinct classes. The results indicate segment saliences as the most effective descriptor for this particular application and confirm the improvement of the contour salience descriptor in comparison with convex contour saliences.
|
Zegarra, J. A. M.;
Papa, J. P.;
Leite, N. J.;
Torres, R. da S.;
Falcão, A. X.
Rotation-invariant Texture Recognition (conference)
International Symposium on Visual Computing (ISVC),
2007.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Zegarra2007,
abstract = {This paper proposes a new texture classification system, which is distinguished by: (1) a new rotation-invariant image descriptor based on Steerable Pyramid Decomposition, and (2) a novel multi-class recognition method based on Optimum Path Forest. By combining the discriminating power of our image descriptor and classifier, our system uses small feature vectors to characterize texture images without compromising overall classification rates. State-of-the-art recognition results are further presented on the Brodatz dataset. High classification rates demonstrate the superiority of the proposed method.},
author = {J. A. M. Zegarra and J. P. Papa and N. J. Leite and R. da S. Torres and A. X. Falcão},
booktitle = {International Symposium on Visual Computing (ISVC)},
date = {2007-01-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/zegarra07isvc.pdf},
title = {Rotation-invariant Texture Recognition},
year = {2007}
}
This paper proposes a new texture classification system, which is distinguished by: (1) a new rotation-invariant image descriptor based on Steerable Pyramid Decomposition, and (2) a novel multi-class recognition method based on Optimum Path Forest. By combining the discriminating power of our image descriptor and classifier, our system uses small feature vectors to characterize texture images without compromising overall classification rates. State-of-the-art recognition results are further presented on the Brodatz dataset. High classification rates demonstrate the superiority of the proposed method.
|
Mariote, Leonardo;
Medeiros, Claudia Bauzer;
Torres, Ricardo
Diagnosing Similarity of Oscillation Trends in Time Series (conference)
International Workshop on spatial and spatio-temporal data mining - SSTDM,
2007.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Mariote2007,
abstract = {Sensor networks have increased the amount and variety of temporal data available, requiring the definition of new techniques for data mining. Related research typically addresses the problems of indexing, clustering, classification, summarization, and anomaly detection. They present many ways for describing and comparing time series, but they focus on their values. This paper concentrates on a new aspect - that of describing oscillation patterns. It presents a technique for time series similarity search, based on multiple temporal scales, defining a descriptor that uses the angular coefficients from a linear segmentation of the curve that represents the evolution of the analyzed series. Preliminary experiments with real datasets showed that our approach correctly characterizes the oscillation of time series.},
author = {Leonardo Mariote and Claudia Bauzer Medeiros and Ricardo Torres},
booktitle = {International Workshop on spatial and spatio-temporal data mining - SSTDM},
date = {2007-01-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/mariote-oscillationTrends.pdf},
pages = {643-648},
title = {Diagnosing Similarity of Oscillation Trends in Time Series},
year = {2007}
}
Sensor networks have increased the amount and variety of temporal data available, requiring the definition of new techniques for data mining. Related research typically addresses the problems of indexing, clustering, classification, summarization, and anomaly detection. They present many ways for describing and comparing time series, but they focus on their values. This paper concentrates on a new aspect - that of describing oscillation patterns. It presents a technique for time series similarity search, based on multiple temporal scales, defining a descriptor that uses the angular coefficients from a linear segmentation of the curve that represents the evolution of the analyzed series. Preliminary experiments with real datasets showed that our approach correctly characterizes the oscillation of time series.
|
Kim, S.;
Fox, E. A.;
Fan, W.;
North, C.;
Tatar, D.;
Torres, R. da S.
Design and Evaluation of Techniques to Utilize Implicit rating Data in Complex Information Systems (Technical Report)
Computer Science Department, Virginia Tech,
Technical Report,
TR-07-20,
2007.
(
Abstract |
Links |
BibTeX |
Tags:
Techreport
)
@techreport{Kim2007,
abstract = {Research in personalization, including recommender systems, focuses on applications such as online shopping malls and simple information systems. These systems consider user profile and item information obtained from data explicitly entered by users - where it is possible to classify the items involved and to perform personalization based on a direct mapping from user or user group to item or item group. However, in complex, dynamic, and professional information systems, such as Digital Libraries, additional capabilities are needed to achieve personalization that supports their distinctive features: large numbers of digital objects, dynamic updates, sparse rating data, biased rating data on specific items, and challenges in getting explicit rating data from users. In this report, we present techniques for collecting, storing, processing, and utilizing implicit rating data of Digital Libraries for analysis and decision support. We present our pilot study to find virtual user groups using implicit rating data. We demonstrate the effectiveness of implicit rating data for characterizing users and finding virtual user communities, through statistical hypothesis testing. Further, we describe a visual data mining tool named VUDM (Visual User model Data Mining tool) that utilizes implicit rating data. We provide the results of a formative evaluation of VUDM and discuss the problems raised and plans for further studies.},
author = {S. Kim and E. A. Fox and W. Fan and C. North and D. Tatar and R. da S. Torres},
date = {2007-01-01},
institution = {Computer Science Department, Virginia Tech},
keyword = {Techreport},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/kim07tr.pdf},
number = {TR-07-20},
title = {Design and Evaluation of Techniques to Utilize Implicit rating Data in Complex Information Systems},
type = {Technical Report},
year = {2007}
}
Research in personalization, including recommender systems, focuses on applications such as online shopping malls and simple information systems. These systems consider user profile and item information obtained from data explicitly entered by users - where it is possible to classify the items involved and to perform personalization based on a direct mapping from user or user group to item or item group. However, in complex, dynamic, and professional information systems, such as Digital Libraries, additional capabilities are needed to achieve personalization that supports their distinctive features: large numbers of digital objects, dynamic updates, sparse rating data, biased rating data on specific items, and challenges in getting explicit rating data from users. In this report, we present techniques for collecting, storing, processing, and utilizing implicit rating data of Digital Libraries for analysis and decision support. We present our pilot study to find virtual user groups using implicit rating data. We demonstrate the effectiveness of implicit rating data for characterizing users and finding virtual user communities, through statistical hypothesis testing. Further, we describe a visual data mining tool named VUDM (Visual User model Data Mining tool) that utilizes implicit rating data. We provide the results of a formative evaluation of VUDM and discuss the problems raised and plans for further studies.
|
Gomes Jr., Luiz Celso;
Medeiros, Claudia Bauzer
Ecologically-aware Queries for Biodiversity Research (conference)
Proceedings GeoInfo - Brazilian Geoinformatics Symposium,
INPE - SBC,
2007.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Jr2007b,
abstract = {To carry out ecologically-relevant biodiversity research, one must collect chunks of information on species and their habitats from a large number of institutions and correlate them using geographic, biological, and ecological knowledge. Distribution and heterogeneity inherent to biodiversity data pose several challenges, such as how to find and merge relevant information on the Web, and how to process a variety of ecological and spatial predicates. This paper presents a framework that exploits advances in data interoperability and Semantic Web technologies to meet these challenges. The solution relies on ontologies and annotated repositories to support data sharing, discovery, and collaborative biodiversity research. A prototype using real data has implemented part of the framework.},
author = {Luiz Celso Gomes Jr and Claudia Bauzer Medeiros},
booktitle = {Proceedings GeoInfo - Brazilian Geoinformatics Symposium},
date = {2007-01-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/gomesmedeiros-geoinfo07.pdf},
publisher = {INPE - SBC},
title = {Ecologically-aware Queries for Biodiversity Research},
year = {2007}
}
To carry out ecologically-relevant biodiversity research, one must collect chunks of information on species and their habitats from a large number of institutions and correlate them using geographic, biological, and ecological knowledge. Distribution and heterogeneity inherent to biodiversity data pose several challenges, such as how to find and merge relevant information on the Web, and how to process a variety of ecological and spatial predicates. This paper presents a framework that exploits advances in data interoperability and Semantic Web technologies to meet these challenges. The solution relies on ontologies and annotated repositories to support data sharing, discovery, and collaborative biodiversity research. A prototype using real data has implemented part of the framework.
|
Digiampietri, Luciano Antonio;
Pérez-Alcázar, José de J.;
Medeiros, Claudia Bauzer
An ontology-based framework for bioinformatics workflows (article)
International Journal of Bioinformatics Research and Applications,
3,
2007.
(
Abstract |
Links |
BibTeX |
Tags:
Article
)
@article{Digiampietri2007b,
abstract = {Bioinformatics activities are growing all over the world, with a proliferation of data and tools. This brings new challenges, such as how to understand and organize these resources, how to exchange and reuse successful experimental procedures, tools, and data, and how to provide interoperability among data and tools across different sites and for distinct user profiles. This paper describes an effort in these directions. It is based on combining research on ontology management, AI, scientific workflows, and the Semantic Web to design, reuse, annotate, and document bioinformatics experiments. The resulting framework takes advantage of ontologies to support the specification and annotation of bioinformatics workflows, and to serve as the basis for tracking data provenance. Moreover, it uses AI planning techniques to support automatic or interactive composition of tasks. These ideas have been implemented in a prototype and validated on real bioinformatics data.},
author = {Luciano Antonio Digiampietri and José de J. Pérez-Alcázar and Claudia Bauzer Medeiros},
date = {2007-01-01},
journal = {International Journal of Bioinformatics Research and Applications},
keyword = {Article},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/IJBRA-3302-Digiampietri-et-al.pdf},
note = {Special Issue on "Ontologies for Bioinformatics", ISSN 1744-5485},
number = {3},
pages = {268-285},
title = {An ontology-based framework for bioinformatics workflows},
volume = {3},
year = {2007}
}
Bioinformatics activities are growing all over the world, with a proliferation of data and tools. This brings new challenges, such as how to understand and organize these resources, how to exchange and reuse successful experimental procedures, tools, and data, and how to provide interoperability among data and tools across different sites and for distinct user profiles. This paper describes an effort in these directions. It is based on combining research on ontology management, AI, scientific workflows, and the Semantic Web to design, reuse, annotate, and document bioinformatics experiments. The resulting framework takes advantage of ontologies to support the specification and annotation of bioinformatics workflows, and to serve as the basis for tracking data provenance. Moreover, it uses AI planning techniques to support automatic or interactive composition of tasks. These ideas have been implemented in a prototype and validated on real bioinformatics data.
|
Borges, Karla A. V.;
Laender, Alberto H. F.;
Medeiros, Claudia Bauzer;
Davis Jr., Clodoveu
Discovering Geographic Locations in Web Pages Using Urban Addresses (conference)
IV Workshop on Geographic Information Retrieval,
ACM,
2007.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Borges2007,
abstract = {This paper presents an approach that helps to discover geographic locations from the recognition, extraction, and geocoding of urban addresses found in Web pages. Experiments that evaluate the presence and incidence of urban addresses in Web pages are described. Experimental results, based on a collection of over 4 million documents from the Brazilian Web, show the feasibility and effectiveness of the proposed method.},
author = {Karla A. V. Borges and Alberto H. F. Laender and Claudia Bauzer Medeiros and Clodoveu Davis Jr},
booktitle = {IV Workshop on Geographic Information Retrieval},
date = {2007-01-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/gir-l05-borges.pdf},
note = {co-located with CIKM 2007},
publisher = {ACM},
title = {Discovering Geographic Locations in Web Pages Using Urban Addresses},
year = {2007}
}
This paper presents an approach that helps to discover geographic locations from the recognition, extraction, and geocoding of urban addresses found in Web pages. Experiments that evaluate the presence and incidence of urban addresses in Web pages are described. Experimental results, based on a collection of over 4 million documents from the Brazilian Web, show the feasibility and effectiveness of the proposed method.
|
Bacarin, Evandro;
Madeira, Edmundo R. M.;
Medeiros, Claudia M. B.
Using choreography to support collaboration in agricultural supply chains (Technical Report)
Institute of Computing, State University of Campinas,
Technical Report,
RT-07-07,
2007.
(
Abstract |
Links |
BibTeX |
Tags:
Techreport
)
@techreport{Bacarin2007b,
abstract = {This paper presents an approach to support choreography in agricultural supply chains. It depicts a model for this kind of chain that considers both static and dynamic aspects, and their mapping to an underlying architecture. In particular, the model emphasizes mutual agreements, coordination of activities, quality enforcement and activity documentation. The architecture is centered on mapping chain elements to Web Services and their dynamics to the choreography of services. A case study, for soy supply chains, is used to motivate the approach.},
author = {Evandro Bacarin and Edmundo R. M. Madeira and Claudia M. B. Medeiros},
date = {2007-01-01},
institution = {Institute of Computing, State University of Campinas},
keyword = {Techreport},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/rt-07-07.pdf},
number = {RT-07-07},
title = {Using choreography to support collaboration in agricultural supply chains},
type = {Technical Report},
year = {2007}
}
This paper presents an approach to support choreography in agricultural supply chains. It depicts a model for this kind of chain that considers both static and dynamic aspects, and their mapping to an underlying architecture. In particular, the model emphasizes mutual agreements, coordination of activities, quality enforcement and activity documentation. The architecture is centered on mapping chain elements to Web Services and their dynamics to the choreography of services. A case study, for soy supply chains, is used to motivate the approach.
|
Bacarin, E.;
Aalst, W.M.P. van der;
Madeira, E.;
Medeiros, C.B.
Towards Modeling and Simulating a Multi-party Negotiation Protocol with Colored Petri Nets (conference)
Proc. CPN 07 - Eighth Workshop and Tutorial on Practical Use of Coloured Petri Nets and the CPN Tools,
2007.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Bacarin2007,
abstract = {E-contracting, i.e., establishing and enacting electronic contracts, has become important because of technological advances (e.g., the availability of web services) and more open markets. However, the establishment of an e-contract is complicated and error prone. There are multiple negotiation styles ranging from auctions to bilateral bargaining. This paper provides an approach for modeling multi-party negotiation protocols in colored Petri nets. It is shown how different negotiation styles can be modeled in a unified and consistent way. Moreover, CPN Tools is used to analyze the resulting colored Petri nets. Simulation can be used for both validation and performance analysis, while state-space analysis can be used to discover anomalies in various multi-party negotiation protocols.},
author = {E. Bacarin and W.M.P. van der Aalst and E. Madeira and C.B. Medeiros},
booktitle = {Proc. CPN 07 - Eighth Workshop and Tutorial on Practical Use of Coloured Petri Nets and the CPN Tools},
date = {2007-01-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/cpn07-final-bacarin-aalst-madeira-medeiros.pdf},
title = {Towards Modeling and Simulating a Multi-party Negotiation Protocol with Colored Petri Nets},
year = {2007}
}
E-contracting, i.e., establishing and enacting electronic contracts, has become important because of technological advances (e.g., the availability of web services) and more open markets. However, the establishment of an e-contract is complicated and error prone. There are multiple negotiation styles ranging from auctions to bilateral bargaining. This paper provides an approach for modeling multi-party negotiation protocols in colored Petri nets. It is shown how different negotiation styles can be modeled in a unified and consistent way. Moreover, CPN Tools is used to analyze the resulting colored Petri nets. Simulation can be used for both validation and performance analysis, while state-space analysis can be used to discover anomalies in various multi-party negotiation protocols.
|
2006 |
Martins, Rodrigo Grassi
A Proposal for the Database of the WebMaps Project (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
2006.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{Martins2006,
abstract = {The goal of the WebMaps project is the specification and development of a Web information system to support crop planning and monitoring in Brazil. This kind of project involves state-of-the-art research all over the world. One of the problems faced by WebMaps is database design. This work attacks this issue, discussing the project's needs and proposing a basic database that supports the management of users, properties, and parcels, as well as other kinds of data, especially satellite images. The main contributions of this work are: the specification of a spatio-temporal database model; the specification of sets of temporal, spatial, and spatio-temporal queries; and the implementation of a prototype in PostgreSQL/PostGIS.},
author = {Rodrigo Grassi Martins},
date = {2006-12-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2015/09/MartinsRodrigoGrassi.pdf},
school = {Instituto de Computação - Unicamp},
title = {A Proposal for the Database of the WebMaps Project},
year = {2006}
}
The goal of the WebMaps project is the specification and development of a Web information system to support crop planning and monitoring in Brazil. This kind of project involves state-of-the-art research all over the world. One of the problems faced by WebMaps is database design. This work attacks this issue, discussing the project's needs and proposing a basic database that supports the management of users, properties, and parcels, as well as other kinds of data, especially satellite images. The main contributions of this work are: the specification of a spatio-temporal database model; the specification of sets of temporal, spatial, and spatio-temporal queries; and the implementation of a prototype in PostgreSQL/PostGIS.
|
Sasaoka, Liliana Kasumi;
Medeiros, Claudia Bauzer
Access Control in Geographic Databases. (conference)
Proc 3rd International Workshop on Conceptual Modeling for Geographic Information Systems (CoMoGIS2006),
Tucson, Arizona,
2006.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Sasaoka2006,
abstract = {The problem of access control in databases consists of determining when (and if) users or applications can access stored data, and what kind of access they are allowed. This paper discusses this problem for geographic databases, where constraints imposed on access control management must consider the spatial location context. The model and solution provided are motivated by problems found in AM/FM applications developed in the management of telephone infrastructure in Brazil, in a real life situation.},
address = {Tucson, Arizona},
author = {Liliana Kasumi Sasaoka and Claudia Bauzer Medeiros},
booktitle = {Proc 3rd International Workshop on Conceptual Modeling for Geographic Information Systems (CoMoGIS2006)},
date = {2006-11-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/sasaokamedeiros06.pdf},
note = {Tucson, Arizona},
title = {Access Control in Geographic Databases.},
year = {2006}
}
The problem of access control in databases consists of determining when (and if) users or applications can access stored data, and what kind of access they are allowed. This paper discusses this problem for geographic databases, where constraints imposed on access control management must consider the spatial location context. The model and solution provided are motivated by problems found in AM/FM applications developed in the management of telephone infrastructure in Brazil, in a real life situation.
|
Zegarra, J. A. M.;
Leite, Neucimar J.;
Torres, Ricardo da Silva
Multiresolution Features for Fingerprint Image Retrieval. (conference)
Workshop of Theses and Dissertations, XIX Brazilian Symposium on Computer Graphics and Image Processing,
2006.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Zegarra2006b,
abstract = {This paper presents a real-time system to guide the search and retrieval in fingerprint image databases, considering both retrieval accuracy and speed. For that purpose, we use multiresolution-based feature extraction and indexing methods that consider the textural information inherent to fingerprint images. The extracted feature vectors are used to compute the distance between the fingerprint query image and all the fingerprints in the database, and the N most similar images are then retrieved. The focus of this work is to study the utility of multiresolution transforms in the domain of fingerprint recognition.},
author = {J. A. M. Zegarra and Neucimar J. Leite and Ricardo da Silva Torres},
booktitle = {Workshop of Theses and Dissertations, XIX Brazilian Symposium on Computer Graphics and Image Processing},
date = {2006-10-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/zegarra06sibgrapi.pdf},
note = {Manaus, Brazil},
title = {Multiresolution Features for Fingerprint Image Retrieval.},
year = {2006}
}
This paper presents a real-time system to guide the search and retrieval in fingerprint image databases, considering both retrieval accuracy and speed. For that purpose, we use multiresolution-based feature extraction and indexing methods that consider the textural information inherent to fingerprint images. The extracted feature vectors are used to compute the distance between the fingerprint query image and all the fingerprints in the database, and the N most similar images are then retrieved. The focus of this work is to study the utility of multiresolution transforms in the domain of fingerprint recognition.
|
Zegarra, J. A. M.;
Leite, Neucimar J.;
Torres, Ricardo da Silva
Efficient and Effective Content-based Image Retrieval Framework for Fingerprint Databases. (conference)
V Workshop of Theses and Dissertations, XXI Brazilian Symposium on Databases,
Florianópolis, Brazil,
2006.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Zegarra2006,
abstract = {Two kinds of fingerprint identification approaches have been proposed in the literature to reduce the number of one-to-many comparisons during fingerprint image retrieval, namely, exclusive and continuous classification. Although exclusive classification approaches reduce the number of comparisons, they present some shortcomings, including ambiguous fingerprint classification and an unbalanced distribution of fingerprints across classes. On the other hand, continuous classification approaches have not been studied as thoroughly. In this context, we propose an original continuous approach to guide search and retrieval in fingerprint image databases, considering both effectiveness and retrieval speed. For those purposes, we use feature extraction and indexing methods that exploit the textural and directional information contained in fingerprint images. Preliminary results of our work involve a comparative study of several textural image descriptors obtained by combining different types of the Wavelet Transform with similarity measures. From our experiments we conclude that the best retrieval accuracy was achieved by combining Gabor Wavelets (GWs) with the Square Chord similarity measure. Furthermore, the presence of noise and distortions in fingerprint images affected the overall retrieval accuracy.},
address = {Florianópolis, Brazil},
author = {J. A. M. Zegarra and Neucimar J. Leite and Ricardo da Silva Torres},
booktitle = {V Workshop of Theses and Dissertations, XXI Brazilian Symposium on Databases},
date = {2006-10-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/zegarra06sbbd.pdf},
note = {Florianópolis, Brazil},
title = {Efficient and Effective Content-based Image Retrieval Framework for Fingerprint Databases.},
year = {2006}
}
|
Digiampietri, Luciano Antonio;
Setubal, Joao Carlos;
Medeiros, Claudia Bauzer
Bioinformatics scientific workflows: combining databases, AI and Web services (conference)
Proceedings of the V Workshop Thesis and Dissertations on Databases,
Florianópolis, SC, Brazil,
2006.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Digiampietri2006,
abstract = {Bioinformatics activities present new challenges, such as how to exchange and reuse successful experimental procedures, tools and data, and how to understand and provide interoperability among data and tools across different sites, for distinct user profiles. This thesis is an effort towards these directions. It is based on combining research on databases, AI and scientific workflows, on the Semantic Web, to design, reuse, annotate and document bioinformatics experiments. The resulting framework allows the integration of heterogeneous data and tools and the design of experiments as scientific workflows, which are stored in databases. Moreover, it takes advantage of the notion of planning in AI to support automatic or interactive composition of tasks. These ideas are being implemented in a prototype and validated on real bioinformatics data.},
address = {Florianópolis, SC, Brazil},
author = {Luciano Antonio Digiampietri and Joao Carlos Setubal and Claudia Bauzer Medeiros},
booktitle = {Proceedings of the V Workshop Thesis and Dissertations on Databases},
date = {2006-10-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/2006_WTDBD2006.pdf},
title = {Bioinformatics scientific workflows: combining databases, AI and Web services},
year = {2006}
}
|
Daltio, J.;
Medeiros, C. B.
Um Servidor de Ontologias para apoio a Sistemas de Biodiversidade (An Ontology Server to support Biodiversity Information Systems) (conference)
V Workshop Thesis and Dissertations on Databases, Proc. XXI Brazilian Symposium on Databases (SBBD 2006),
Florianópolis, Brazil,
2006.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Daltio2006,
abstract = {Biodiversity research requires associating data about living beings and their habitats, integrating everything from geographical features to domain specifications, often through ontologies. In this context arise the so-called Biodiversity Information Systems: new management solutions that allow researchers to analyze species characteristics and their interactions. The goal of this project is to specify and develop an ontology web service that can be used by different biodiversity systems. The main contributions of this work are the specification of the requirements of an ontology service, and the specification and implementation of an ontology server.},
address = {Florianópolis, Brazil},
author = {J. Daltio and C. B. Medeiros},
booktitle = {V Workshop Thesis and Dissertations on Databases, Proc. XXI Brazilian Symposium on Databases (SBBD 2006)},
date = {2006-10-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/WTDBD06pdf.pdf},
pages = {71-76},
title = {Um Servidor de Ontologias para apoio a Sistemas de Biodiversidade (An Ontology Server to support Biodiversity Information Systems)},
year = {2006}
}
|
Schimiguel, Juliano
Um framework para a avaliação de interfaces de aplicações SIG Web no dominio agricola (phdthesis)
Instituto de Computação - Universidade Estadual de Campinas (UNICAMP),
Campinas - SP,
phdthesis,
2006.
(
Abstract |
Links |
BibTeX |
Tags:
Interfaces de usuario (Sistema de computador) - Avaliação, Sistemas de informação geografica
)
@phdthesis{schimiguel,
abstract = {Geographic Information Systems (GIS) are a category of software for the manipulation, management, and visualization of georeferenced data. The term georeferenced denotes association with a geographic coordinate system. There are countless categories of GIS applications, at different scales and in different domains, ranging from urban to environmental themes. Web-based Geographic Information System applications, in this work called "Web GIS applications", are systems in which geographic information may be dispersed across different places and is manipulated via GIS over the Internet. The importance of Web GIS directed at agricultural systems, the focus of this work, stems from the fact that they serve as a useful tool for users who work directly or indirectly in the domain: farmers, agronomists, agricultural cooperatives, and government agencies connected to the area. User interfaces in Web GIS have been developed without practices and criteria that consider the specificities of this application domain and the diversity of users on the Web. The quality of these applications' interfaces directly influences their use. This work sets out to conceptualize quality in the context of Web GIS application interfaces, investigating both the product (the Web GIS application interface) and the interface design process of such applications. These two perspectives form the basis of recommendations for the evaluation of their interfaces. The main result of this work is the definition of a semiotically grounded framework to guide designers and stakeholders in the design of Web GIS applications when evaluating the interfaces of such applications. This framework organizes an analysis space containing the recommendations identified in the contexts of product evaluation and of the Web GIS application design process. It was developed and tested using a set of real applications and case studies in the agricultural domain.},
address = {Campinas - SP},
author = {Juliano Schimiguel},
date = {2006-09-28},
keyword = {Interfaces de usuario (Sistema de computador) - Avaliação, Sistemas de informação geografica},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2015/09/SchimiguelJuliano-ilovepdf-compressed.pdf},
school = {Instituto de Computação - Universidade Estadual de Campinas (UNICAMP)},
title = {Um framework para a avaliação de interfaces de aplicações SIG Web no dominio agricola},
year = {2006}
}
|
Zegarra, J. A. M.;
Leite, N. J.;
Torres, R. da S.
Wavelet-based Feature Extraction for Fingerprint Image Retrieval (Technical Report)
Institute of Computing, State University of Campinas,
Technical Report,
IC-06-12,
2006.
(
Abstract |
Links |
BibTeX |
Tags:
Techreport
)
@techreport{Zegarra2006b,
abstract = {This paper presents a novel approach to fingerprint retrieval for personal identification that joins three image retrieval tasks, namely feature extraction, similarity measurement, and feature indexing, into a wavelet-based fingerprint retrieval system. We propose the use of different types of Wavelets for representing and describing the textural information present in fingerprint images. For those purposes, the feature vectors used to characterize the fingerprints are obtained by computing the mean and the standard deviation of the decomposed images in the Wavelet domain. These feature vectors are used to retrieve the most similar fingerprints given a query image, while their indexing is used to reduce the search space of candidate images. The different types of Wavelets used in our study include Gabor Wavelets (GWs), the Tree-Structured Wavelet Decomposition using both Orthogonal Filter Banks (TOWT) and Bi-orthogonal Filter Banks (TBOWT), and the Steerable Wavelets. To evaluate the retrieval accuracy of the proposed approach, eight different data sets were used. Experiments also evaluated different combinations of Wavelets with six similarity measures. The results show that the Gabor Wavelets combined with the Square Chord similarity measure achieve the best retrieval effectiveness.},
author = {J. A. M. Zegarra and N. J. Leite and R. da S. Torres},
date = {2006-09-01},
institution = {Institute of Computing, State University of Campinas},
keyword = {Techreport},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/zegarra06tr.pdf},
number = {IC-06-12},
title = {Wavelet-based Feature Extraction for Fingerprint Image Retrieval},
type = {Technical Report},
year = {2006}
}
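The technical report above describes its descriptor concretely: decompose each fingerprint image with a wavelet transform, take the mean and standard deviation of each subband as the feature vector, and compare vectors with the Square Chord similarity measure. A minimal sketch of that idea follows, using a hand-rolled one-level Haar decomposition as a stand-in for the Gabor and tree-structured wavelets studied in the report; all names here are illustrative, not the authors' code.

```python
# Hedged sketch (not the authors' implementation): mean/std wavelet-subband
# features and the Square Chord distance, per the abstract's description.
import numpy as np

def haar_decompose(img):
    """One-level 2-D Haar transform; returns the (LL, LH, HL, HH) subbands."""
    img = np.asarray(img, dtype=float)
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def texture_features(img):
    """Mean absolute value and standard deviation of each subband."""
    feats = []
    for band in haar_decompose(img):
        feats.append(np.abs(band).mean())
        feats.append(band.std())
    return np.array(feats)

def square_chord(u, v):
    """Square Chord distance: sum_i (sqrt(u_i) - sqrt(v_i))^2, for u, v >= 0."""
    u = np.sqrt(np.maximum(u, 0.0))
    v = np.sqrt(np.maximum(v, 0.0))
    return float(np.sum((u - v) ** 2))
```

Retrieval then amounts to ranking the database by `square_chord(texture_features(query), texture_features(candidate))` and returning the N smallest distances; the report's indexing step would prune that linear scan.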
|
Santanchè, André
The Fluid Web and Digital Content Components: from a document-centered to a content-centered view (phdthesis)
Instituto de Computação - Unicamp,
phdthesis,
2006.
(
Abstract |
Links |
BibTeX |
Tags:
PhDThesis
)
@phdthesis{Santanche2006,
abstract = {The Web is evolving from a space for publication/consumption of documents to an environment for collaborative work, where digital content can travel and be replicated, adapted, decomposed, fused, and transformed. We call this the Fluid Web perspective. This view requires a thorough revision of the typical document-oriented approach that permeates content management on the Web. This thesis presents our solution for the Fluid Web, which allows moving from the document-oriented to a content-oriented perspective, where "content" can be any digital object. The solution is based on two axes: a self-descriptive unit to encapsulate any kind of content artifact - the Digital Content Component (DCC); and a Fluid Web infrastructure that provides management and deployment of DCCs through the Web, and whose goal is to support collaboration on the Web.},
author = {André Santanchè},
date = {2006-08-01},
keyword = {PhDThesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/tese-andre-santanche.pdf},
school = {Instituto de Computação - Unicamp},
title = {The Fluid Web and Digital Content Components: from a document-centered to a content-centered view},
year = {2006}
}
|
Shen, Rao;
Vemuri, Naga Srinivas;
Fan, Weiguo;
Torres, Ricardo da S.;
Fox, Edward A.
Exploring digital libraries: integrating browsing, searching, and visualization (conference)
JCDL '06: Proceedings of the 6th ACM/IEEE-CS joint conference on Digital libraries,
ACM Press,
2006.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Shen2006,
abstract = {Exploring services for digital libraries (DLs) include two major paradigms, browsing and searching, as well as other services such as clustering and visualization. In this paper, we formalize and generalize DL exploring services within a DL theory. We develop theorems to indicate that browsing and searching can be converted or mapped to each other under certain conditions. The theorems guide the design and implementation of exploring services for an integrated archaeological DL, ETANA-DL. Its integrated browsing and searching can support users in moving seamlessly between these operations, minimizing context switching, and keeping users focused. It also integrates browsing and searching into a single visual interface for DL exploration. A user study to evaluate ETANA-DL's exploring services helped validate our hypotheses.},
author = {Rao Shen and Naga Srinivas Vemuri and Weiguo Fan and Ricardo da S. Torres and Edward A. Fox},
booktitle = {JCDL '06: Proceedings of the 6th ACM/IEEE-CS joint conference on Digital libraries},
date = {2006-06-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/torres06icdl.pdf},
pages = {1-10},
publisher = {ACM Press},
title = {Exploring digital libraries: integrating browsing, searching, and visualization},
year = {2006}
}
|
Bauzer-Medeiros, Claudia;
Carles, Olivier;
Devuyst, Florian;
Hébrail, Georges;
Hugueney, Bernard;
Joliveau, Marc;
Jomier, Geneviève;
Manouvrier, Maude;
Naïja, Yosr;
Scemama, Gérard;
Steffan, Laurent
Towards a data warehouse for urban traffic (in French) (article)
Revue des Nouvelles Technologies de L'Information,
B2,
2006.
(
Abstract |
Links |
BibTeX |
Tags:
Article
)
@article{Bauzer-Medeiros2006,
abstract = {This article presents the multidisciplinary approach we adopted to build an information system for decision support in road traffic management. The architecture of the system, the schema of the data warehouse, and the different numerical and symbolic representations of the spatio-temporal sequences stored in the warehouse are detailed.},
author = {Claudia Bauzer-Medeiros and Olivier Carles and Florian Devuyst and Georges Hébrail and Bernard Hugueney and Marc Joliveau and Geneviève Jomier and Maude Manouvrier and Yosr Naïja and Gérard Scemama and Laurent Steffan},
date = {2006-06-01},
journal = {Revue des Nouvelles Technologies de L'Information},
keyword = {Article},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/caddyfinal1.pdf},
note = {Text in French. Published within 2eme journee francophone sur les Entrepots de Données et l'Analyse en ligne},
number = {B2},
pages = {119-137},
title = {Towards a data warehouse for urban traffic (in French)},
volume = {RNTI},
year = {2006}
}
|
Torres, Ricardo da Silva
Information Systems for Managing Image Collections: Applications and Research Challenges (in Portuguese) (conference)
Great Challenges of the Brazilian Computer Society,
2006.
(
Links |
BibTeX |
Tags:
Conference
)
@conference{daTorres2006b,
author = {Ricardo da Silva Torres},
booktitle = {Great Challenges of the Brazilian Computer Society},
date = {2006-05-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/informationsystems06.pdf},
title = {Information Systems for Managing Image Collections: Applications and Research Challenges (in Portuguese)},
year = {2006}
}
|
Torres-Zenteno, A. H.;
Martins, E.;
Torres, Ricardo da Silva;
Cuaresma, M. J. E.
Teste de Desempenho em Aplicações SIG Web (Performance Tests in Web GIS Applications) (conference)
The Ibero-American Workshop on Requirements Engineering and Software Environments,
2006.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Torres-Zenteno2006,
abstract = {This article proposes a performance-testing process model for Web GIS applications. To create test scenarios, the model considers the use cases that are most critical, or pose the greatest risk, to system performance. It also provides for the use of free tools to automate steps of the evaluation process. The model was applied to the WebMaps project, a Web GIS application whose purpose is to assist its users in agricultural planning based on regions of interest. Preliminary results indicate that the tests were useful in identifying problems in the system's preliminary architecture.},
author = {A. H. Torres-Zenteno and E. Martins and Ricardo da Silva Torres and M. J. E. Cuaresma},
booktitle = {The Ibero-American Workshop on Requirements Engineering and Software Environments},
date = {2006-05-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/zenteno06ideas.pdf},
pages = {449-462},
title = {Teste de Desempenho em Aplicações SIG Web (Performance Tests in Web GIS Applications)},
year = {2006}
}
|
Torres, Ricardo da Silva;
Medeiros, Claudia Bauzer;
Gonçalves, Marcos André;
Fox, Edward A.
A Digital Library Framework for Biodiversity Information Systems. (article)
International Journal on Digital Libraries,
1,
2006.
(
Abstract |
Links |
BibTeX |
Tags:
Article
)
@article{daTorres2006,
abstract = {Biodiversity Information Systems (BISs) involve all kinds of heterogeneous data, which include ecological and geographical features. However, available information systems offer very limited support for managing these kinds of data in an integrated fashion. Furthermore, such systems do not fully support image content (e.g., photos of landscapes or living organisms) management, a requirement of many BIS end-users. In order to meet their needs, these users -- e.g., biologists, environmental experts -- often have to alternate between separate biodiversity and image information systems to combine information extracted from them. This hampers the addition of new data sources, as well as cooperation among scientists. The approach provided in this paper to meet these issues is based on taking advantage of advances in digital library innovations to integrate networked collections of heterogeneous data. It focuses on creating the basis for a next-generation BIS, combining new techniques of content-based image retrieval and database query processing mechanisms. This paper shows the use of this component-based architecture to support the creation of two tailored BIS systems dealing with fish specimen identification using search techniques. Experimental results suggest that this new approach improves the effectiveness of the fish identification process, when compared to the traditional key-based method.},
author = {Ricardo da Silva Torres and Claudia Bauzer Medeiros and Marcos André Gonçalves and Edward A. Fox},
date = {2006-02-01},
journal = {International Journal on Digital Libraries},
keyword = {Article},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/torres06ijdl.pdf},
number = {1},
pages = {3-17},
title = {A Digital Library Framework for Biodiversity Information Systems.},
volume = {6},
year = {2006}
}
|
Digiampietri, Luciano Antonio;
Alcazar, José de Jésus Pérez;
Medeiros, Claudia Bauzer;
Setubal, Joao Carlos
A framework based on semantic Web services and AI planning for the management of bioinformatics scientific workflows (Technical Report)
Institute of Computing, University of Campinas (Unicamp),
Technical Report,
IC-06-004,
2006.
(
Abstract |
Links |
BibTeX |
Tags:
Techreport
)
@techreport{Digiampietri2006b,
abstract = {Bioinformatics activities are growing all over the world, with proliferation of data and tools. This brings new challenges, such as how to understand and organize these resources, how to exchange and reuse successful experimental procedures, tools and data, and how to provide interoperability among data and tools across different sites, and for distinct user profiles. This paper describes an effort towards these directions. It is based on combining research on databases, AI and scientific workflows, on the Semantic Web, to design, reuse, annotate and document bioinformatics experiments or parts thereof. The resulting framework allows the integration of heterogeneous data and tools, and the design of experiments as scientific workflows, which are stored in databases. Moreover, it takes advantage of the notion of planning in AI to support automatic or interactive composition of tasks. These ideas are being implemented in a prototype and validated on real bioinformatics data.},
author = {Luciano Antonio Digiampietri and José de Jésus Pérez Alcazar and Claudia Bauzer Medeiros and Joao Carlos Setubal},
date = {2006-02-01},
institution = {Institute of Computing, University of Campinas (Unicamp)},
keyword = {Techreport},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/digiampietri_report.pdf},
number = {IC-06-004},
title = {A framework based on semantic Web services and AI planning for the management of bioinformatics scientific workflows},
type = {Technical Report},
year = {2006}
}
|
Adam, Randall Luis
Granulometric analysis of cell nuclei texture: Design of computational tools and application in biological models (phdthesis)
Instituto de Computação - Unicamp,
phdthesis,
2006.
(
Abstract |
BibTeX |
Tags:
PhDThesis
)
@phdthesis{Adam2006,
abstract = {The chromatin texture of cell nuclei is a special concern for pathologists, because it can reflect metabolic changes, proliferative activity, nutritional state and cell differentiation. This work intends to enhance knowledge about image texture by assessing granulometric features, surveying the literature for different methods of residue extraction. Three groups of granulometric methods were implemented: A) classic granulometry, which operates with structuring elements; B) granulometry by area closing or volume closing, which builds a component tree; C) granulometry by H-basins closing, using geodesic reconstruction. The different methods were compared with each other using two image sets from biological models. Model I: analysis of chromatin changes of cardiomyocytes during the histological development of 89 Wistar rats, between 19 days after conception and 60 days after birth. In order to obtain hematoxylin-stained cytologic preparations, the formalin-fixed samples were KOH-hydrolysed for 18 hours. The number of mitoses and the granulometric parameters had a similar Spearman correlation coefficient with age, around -0.77. The chromatin texture becomes smoother with ageing, reflecting the progressive cell differentiation of cardiomyocytes. Model II: detection of chromatin texture differences between nuclei of three lung neoplasias and normal tracheobronchial cells. Cytologic brush smears, hematoxylin-eosin stained, collected during bronchoscopic exams of 117 patients divided into 4 groups, were compared with each other. The granulometry that uses structuring elements classified 68.4% of cases correctly. Among the residue-extraction techniques, the extraction of H-residues by geodesic reconstruction showed the most significant results in the first biological model. Classic granulometric features performed better at classifying the image groups in biological model II. Residue extraction by area opening or volume opening proved to be a fast and efficient method.
The granulometric residues are capable of providing useful information about chromatin texture, as demonstrated in nuclear changes during the development of the myocardium and between human lung cancers.},
author = {Randall Luis Adam},
date = {2006-02-01},
keyword = {PhDThesis},
school = {Instituto de Computação - Unicamp},
title = {Granulometric analysis of cell nuclei texture: Design of computational tools and application in biological models},
year = {2006}
}
|
Schimiguel, Juliano;
Baranauskas, Maria Cecilia Calani;
Medeiros, Claudia Bauzer
Usabilidade de Aplicações SIG Web na Perspectiva do Usuário: um Estudo de Caso. (article)
Informatica Publica,
2006.
(
BibTeX |
Tags:
Article
)
@article{Schimiguel2006,
author = {Juliano Schimiguel and Maria Cecilia Calani Baranauskas and Claudia Bauzer Medeiros},
date = {2006-01-01},
journal = {Informatica Publica},
keyword = {Article},
pages = {7-22},
title = {Usabilidade de Aplicações SIG Web na Perspectiva do Usuário: um Estudo de Caso.},
volume = {8},
year = {2006}
}
|
Rocha, Lenaldo B.;
Adam, Randall L.;
Leite, Neucimar J.;
Metze, Konradin;
Rossi, Marcos A.
Biomineralization of polyanionic collagen-elastin matrices during calvarial bone repair (article)
Journal of Biomedical Materials Research,
Issue 2,
2006.
(
Abstract |
Links |
BibTeX |
Tags:
Article
)
@article{Rocha2006,
abstract = {The polyanionic collagen-elastin matrices (PCEMs) are osteoconductive scaffolds that present high biocompatibility and efficacy in the regeneration of bone defects. In this study, the objective was to determine whether these matrices are directly mineralized during the osteogenesis process and how they influence the organization of the new bone extracellular matrix. Samples of three PCEMs, differing in their charge density, were implanted into critical-sized calvarial bone defects created in rats and evaluated from 3 days up to 1 year after implantation. The implanted PCEMs were directly biomineralized by osteoblasts, as shown by ultrastructural, histoenzymologic, and morphologic analysis. The removal of the implants occurred during the bone remodeling process. The organization of the new bone matrix was evaluated by image texture analysis, determining the Shannon entropy and the fractal dimension of digital images. The bone matrix complexity decreased as the osteogenesis progressed, approaching the values obtained for the original bone structure. These results show that the PCEMs allow faster formation of new bone by direct biomineralization of their structure, skipping the biomaterial resorption phase.},
author = {Lenaldo B. Rocha and Randall L. Adam and Neucimar J. Leite and Konradin Metze and Marcos A. Rossi},
date = {2006-01-01},
journal = {Journal of Biomedical Materials Research},
keyword = {Article},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/randall06.pdf},
note = {DOI 10.1002/jbm.a.30782},
number = {2},
pages = {237-245},
title = {Biomineralization of polyanionic collagen-elastin matrices during calvarial bone repair},
volume = {79A},
year = {2006}
}
|
Murthy, U.;
Torres, Ricardo da Silva;
Fox, Edward A.
A Superimposed Application for Enhanced Image Description and Retrieval. (conference)
European Conference on Digital Libraries,
2006.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Murthy2006,
abstract = {In this demo proposal, we describe our prototype application, SIERRA, which combines text-based and content-based image retrieval and allows users to link together image content of varying document granularity with related data like annotations. To achieve this, we use the concept of superimposed information (SI), which enables users to (a) deal with information of varying granularity (sub-document to complete document), and (b) select or work with information elements at sub-document level while retaining the original context.},
author = {U. Murthy and Ricardo da Silva Torres and Edward A. Fox},
booktitle = {European Conference on Digital Libraries},
date = {2006-01-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/murthy06ecdl.pdf},
title = {A Superimposed Application for Enhanced Image Description and Retrieval.},
year = {2006}
}
|
Medeiros, C. B.;
Torres, R.;
Falcão, A.;
Lewinsohn, T.;
Prado, P.;
Freitas, A.;
Jr, L. C. Gomes;
Daltio, J.;
Andaló, F. A.
WeBios - Web Service Multimodal Tools for Strategic Biodiversity Research. (in Portuguese) (conference)
WeBios - Web Service Multimodal Tools for Strategic Biodiversity Research. (in Portuguese).,
UPA, UNICAMP,
2006.
(
Links |
BibTeX |
Tags:
Conference
)
@conference{Medeiros2006,
address = {UPA, UNICAMP},
author = {C. B. Medeiros and R. Torres and A. Falcão and T. Lewinsohn and P. Prado and A. Freitas and L. C. Gomes Jr and J. Daltio and F. A. Andaló},
booktitle = {WeBios - Web Service Multimodal Tools for Strategic Biodiversity Research. (in Portuguese).},
date = {2006-01-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/UPA_Webios.pdf},
title = {WeBios - Web Service Multimodal Tools for Strategic Biodiversity Research. (in Portuguese)},
year = {2006}
}
|
Martins, Felipe Camilo;
Medeiros, Claudia Bauzer
Implementation issues in the WebMAPS project (conference)
XIV Poster Conference of Undergraduate Projects,
UNICAMP,
2006.
(
BibTeX |
Tags:
Conference
)
@conference{Martins2006b,
address = {UNICAMP},
author = {Felipe Camilo Martins and Claudia Bauzer Medeiros},
booktitle = {XIV Poster Conference of Undergraduate Projects},
date = {2006-01-01},
keyword = {Conference},
title = {Implementation issues in the WebMAPS project},
year = {2006}
}
|
Barga, R. S.;
Digiampietri, Luciano Antonio
Automatic Generation of Workflow Provenance (conference)
Provenance and Annotation of Data International Provenance and Annotation Workshop (IPAW),
Springer,
2006.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Barga2006,
abstract = {While workflow is playing an increasingly important role in e-Science, current systems lack support for the collection of provenance data. We argue that workflow provenance data should be automatically generated by the enactment engine and managed over time by an underlying storage service. We briefly describe our layered model for workflow execution provenance, which allows navigation from the conceptual model of an experiment to instance data collected during a specific experiment run, and back.},
author = {R. S. Barga and Luciano Antonio Digiampietri},
booktitle = {Provenance and Annotation of Data International Provenance and Annotation Workshop (IPAW)},
date = {2006-01-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/IPAW2006.pdf},
note = {ISBN: 3-540-46302-X},
publisher = {Springer},
title = {Automatic Generation of Workflow Provenance},
volume = {4145},
year = {2006}
}
|
Auada, Mariam P.;
Adam, Randall L.;
Leite, Neucimar J.;
Puzzi, M. M.;
Cintra, Maria Letícia;
Rizzo, W. B.;
Metze, Konradin
Fourier-Based Texture analysis of lower epidermis in Sjögren-Larsson Syndrome. (article)
Analytical and Quantitative Cytology and Histology,
4,
2006.
(
Abstract |
BibTeX |
Tags:
Article
)
@article{Auada2006,
abstract = {To investigate whether image analysis of routine hematoxylin-eosin (H-E) skin sections using fast Fourier transformation (FFT) could detect structural alterations in patients with Sjogren-Larsson syndrome (SLS) diagnosed by molecular biology. STUDY DESIGN: Skin punch biopsies of 9 patients with SLS and 17 healthy volunteers were obtained. Digital images of routine histologic sections were taken, and their gray scale luminance was analyzed by FFT. The inertia values were determined for different ranges of the spatial frequencies in the vertical and horizontal direction. To get an estimation of anisotropy, we calculated the resultant vector of the designated frequency ranges. RESULTS: In the prickle cell layer, SLS patients showed more intense amplitudes in spatial structures with periods between 1.2 and 3.6 microm in the vertical direction, which correlated in part with accentuated nuclei and nucleoli and perinucleolar halos in the H-E sections. In a linear discriminant analysis, the variables derived from the FFT images correctly discriminated 84.6% of the patients. Texture features derived from the gray level cooccurrence matrix were not able to separate the groups. CONCLUSION: Exploratory texture analysis by FFT was able to detect discrete alterations in the prickle cell layer in routine light microscopy slides of SLS patients. The structural changes identified by FFT may be related to abnormal cellular components associated with aberrant lipid metabolism.},
author = {Mariam P. Auada and Randall L. Adam and Neucimar J. Leite and M. M. Puzzi and Maria Letícia Cintra and W. B. Rizzo and Konradin Metze},
date = {2006-01-01},
journal = {Analytical and Quantitative Cytology and Histology},
keyword = {Article},
number = {4},
pages = {219-27},
title = {Fourier-Based Texture analysis of lower epidermis in Sjögren-Larsson Syndrome.},
volume = {28},
year = {2006}
}
|
Adam, Randall L.;
Silva, Rosana C.;
Pereira, Fernanda G.;
Leite, Neucimar J.;
Lorand-Metze, Irene;
Metze, Konradin
The fractal dimension of nuclear chromatin as a prognostic factor in acute precursor B lymphoblastic leukemia (article)
Cellular Oncology,
1-2,
2006.
(
Abstract |
BibTeX |
Tags:
Article
)
@article{Adam2006b,
abstract = {The fractal nature of the DNA arrangement has been postulated to be a common feature of all cell nuclei. We investigated the prognostic importance of the fractal dimension (FD) of chromatin in blasts of patients with acute precursor B lymphoblastic leukemia (B-ALL). In 28 patients, gray scale transformed pseudo-3D images of 100 nuclei (May-Grünwald-Giemsa stained bone marrow smears) were analyzed. FD was determined by the Minkowski-Bouligand method extended to three dimensions. Goodness-of-fit of FD was estimated by the R2 values in the log-log plots. Whereas FD presented no prognostic relevance, patients with higher R2 values showed a prolonged survival. White blood cell count (WBC), age and mean fluorescence intensity of CD45 (MFICD45) were all unfavorable prognostic factors in univariate analyses. In a multivariate Cox regression, R2, WBC, and MFICD45 entered the final model, which proved to be stable in a bootstrap resampling study. Blasts with lower R2 values, equivalent to accentuated "coarseness" of the chromatin pattern, which may reflect profound changes in DNA methylation, indicated a poor prognosis. In conclusion, the goodness-of-fit of the Minkowski-Bouligand dimension of chromatin can be regarded as a new and biologically relevant prognostic factor for patients with B-ALL.},
author = {Randall L. Adam and Rosana C. Silva and Fernanda G. Pereira and Neucimar J. Leite and Irene Lorand-Metze and Konradin Metze},
date = {2006-01-01},
journal = {Cellular Oncology},
keyword = {Article},
number = {1-2},
pages = {55-59},
title = {The fractal dimension of nuclear chromatin as a prognostic factor in acute precursor B lymphoblastic leukemia},
volume = {28},
year = {2006}
}
|
2005 |
Degan, Joyce Otsuka Cortes
Corporate data integration: proposal of an architecture based on data services (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
2005.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{Degan2005,
abstract = {The need for data integration in enterprises dates back several decades. However, it is still a pressing problem for most environments, since it is seen as a means to allow integration among customers, partners and suppliers. Besides needs that arise from the fusion of companies, there is always the issue of legacy systems that result from distinct implementations in different technologies. The resulting scenario is a distributed set of files and databases, which are redundant, heterogeneous and hard to manage. Data integration requires reliable mechanisms, as well as an integrated set of procedures to ensure consistency, security and control of corporate data. Off-the-shelf solutions still provide fragmented views of data integration. This work analyzes problems found in enterprises during data integration processes, taking all previously mentioned factors into consideration. It proposes an architecture to solve these problems. The solution combines research in databases, distributed systems, and Web services and systems.},
author = {Joyce Otsuka Cortes Degan},
date = {2005-12-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/degan.pdf},
school = {Instituto de Computação - Unicamp},
title = {Corporate data integration: proposal of an architecture based on data services},
year = {2005}
}
|
Costa, G.G.L.;
Digiampietri, Luciano Antonio;
Ostroski, E.H.;
Setubal, Joao Carlos
Evaluation of graph based protein clustering methods (conference)
Proceedings of the Fifth Brazilian Symposium on Mathematical and Computational Biology (BIOMAT2005),
Petropolis, RJ, Brazil,
2005.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Costa2005,
abstract = {Protein clustering is widely used to characterize proteins functionally. Many automatic methods for protein clustering use a graph-based approach. In this work, we propose a methodology for evaluating the solutions given by these methods.},
address = {Petropolis, RJ, Brazil},
author = {G.G.L. Costa and Luciano Antonio Digiampietri and E.H. Ostroski and Joao Carlos Setubal},
booktitle = {Proceedings of the Fifth Brazilian Symposium on Mathematical and Computational Biology (BIOMAT2005)},
date = {2005-12-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/biomatv9.pdf},
title = {Evaluation of graph based protein clustering methods},
year = {2005}
}
|
Torres, Ricardo da Silva;
Falcão, Alexandre Xavier;
Gonçalves, Marcos André;
Zhang, Baoping;
Fan, Weiguo;
Fox, Edward A.;
Calado, Pavel
A New Framework to Combine Descriptors for Content-based Image Retrieval. (conference)
Proceedings of the Fourteenth Conference on Information and Knowledge Management (CIKM05),
2005.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{daTorres2005,
abstract = {In this paper, we propose a novel framework using Genetic Programming to combine image database descriptors for content-based image retrieval (CBIR). Our framework is validated through several experiments involving two image databases and specific domains, where the images are retrieved based on the shape of their objects.},
author = {Ricardo da Silva Torres and Alexandre Xavier Falcão and Marcos André Gonçalves and Baoping Zhang and Weiguo Fan and Edward A. Fox and Pavel Calado},
booktitle = {Proceedings of the Fourteenth Conference on Information and Knowledge Management (CIKM05)},
date = {2005-11-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/torres05cikm.pdf},
note = {Bremen, Germany},
pages = {335-336},
title = {A New Framework to Combine Descriptors for Content-based Image Retrieval.},
year = {2005}
}
|
Schimiguel, Juliano;
Baranauskas, Maria Cecília Calani;
Medeiros, Claudia Bauzer
Usabilidade de Aplicações SIG Web na Perspectiva do Usuário: um Estudo de Caso (Usability of WEB GIS Applications in the User Perspective: a Case Study) (conference)
Proc. VI Brazilian Symposium on GeoInformatics GEOINFO2005,
2005.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Schimiguel2005,
abstract = {Web GIS applications have received marked attention in recent years, since geographic information can be visualized/manipulated in different places, by different profiles of users, using the Internet. This increases the implementation complexity of GIS applications, both with regard to functional aspects and to human-computer interface aspects. The goal of this work is to illustrate the concept and techniques of usability in the context of interfaces for Web GIS applications, by means of a case study of usability tests for these applications.},
author = {Juliano Schimiguel and Maria Cecília Calani Baranauskas and Claudia Bauzer Medeiros},
booktitle = {Proc. VI Brazilian Symposium on GeoInformatics GEOINFO2005},
date = {2005-11-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/geoinfo05p44.pdf},
title = {Usabilidade de Aplicações SIG Web na Perspectiva do Usuário: um Estudo de Caso (Usability of WEB GIS Applications in the User Perspective: a Case Study)},
year = {2005}
}
|
Digiampietri, Luciano Antonio;
Pérez-Alcázar, J.J.;
Medeiros, Claudia Bauzer;
Setubal, Joao Carlos
Bioinformatics scientific workflows: combining databases, AI and Web services (conference)
International Workshop on Genomic Databases (IWGD'05),
Rio de Janeiro, RJ, Brazil,
2005.
(
BibTeX |
Tags:
Conference
)
@conference{Digiampietri2005b,
address = {Rio de Janeiro, RJ, Brazil},
author = {Luciano Antonio Digiampietri and J.J. Pérez-Alcázar and Claudia Bauzer Medeiros and Joao Carlos Setubal},
booktitle = {International Workshop on Genomic Databases (IWGD'05)},
date = {2005-11-01},
keyword = {Conference},
title = {Bioinformatics scientific workflows: combining databases, AI and Web services},
year = {2005}
}
|
Schimiguel, Juliano;
Melo, Amanda Meincke;
Baranauskas, M. Cecília C.;
Medeiros, Claudia Bauzer
Accessibility as a quality requirement: geographic information systems on the web (conference)
CLIHC '05: Proceedings of the 2005 Latin American conference on Human-computer interaction,
ACM Press.,
2005.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Schimiguel2005b,
abstract = {Web applications enable users with different profiles and needs to access information from diversified locations and with different access tools. Besides the aspects that have already been discussed in works from the Software Quality domain, the accessibility of information and the flexibility of the Internet have been considered increasingly important. Thus, considering accessibility as an important quality attribute for Web applications, in this paper we investigate the subject in the context of Geographic Information Systems on the Web. Preliminary results of accessibility evaluations on some WebGIS applications show that this domain presents several challenges to be coped with in the design of their user interfaces.},
author = {Juliano Schimiguel and Amanda Meincke Melo and M. Cecília C. Baranauskas and Claudia Bauzer Medeiros},
booktitle = {CLIHC '05: Proceedings of the 2005 Latin American conference on Human-computer interaction},
date = {2005-10-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/CLIHC05.pdf},
pages = {8-19},
publisher = {ACM Press.},
title = {Accessibility as a quality requirement: geographic information systems on the web},
year = {2005}
}
|
Miranda, Paulo A. V.;
Torres, Ricardo da Silva;
Falcão, Alexandre Xavier
TSD: A Shape Descriptor based on a Distribution of Tensor Scale Local Orientation. (conference)
XVIII Brazilian Symposium on Computer Graphics and Image Processing,
Natal, RN, Brazil,
2005.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Miranda2005,
abstract = {We present tensor scale descriptor (TSD)--- a shape descriptor for content-based image retrieval, registration, and analysis. TSD exploits the notion of local structure thickness, orientation, and anisotropy as represented by the largest ellipse centered at each image pixel and within the same homogeneous region. The proposed method uses the normalized histogram of the local orientation (the angle of the ellipse) at regions of high anisotropy and thickness within a certain interval. It is shown that TSD is invariant to rotation and to some reasonable level of scale changes. Experimental results with a fish database are presented to illustrate and validate the method.},
address = {Natal, RN, Brazil},
author = {Paulo A. V. Miranda and Ricardo da Silva Torres and Alexandre Xavier Falcão},
booktitle = {XVIII Brazilian Symposium on Computer Graphics and Image Processing},
date = {2005-10-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/miranda05sibgrapi.pdf},
pages = {139 - 146},
title = {TSD: A Shape Descriptor based on a Distribution of Tensor Scale Local Orientation.},
year = {2005}
}
We present tensor scale descriptor (TSD)--- a shape descriptor for content-based image retrieval, registration, and analysis. TSD exploits the notion of local structure thickness, orientation, and anisotropy as represented by the largest ellipse centered at each image pixel and within the same homogeneous region. The proposed method uses the normalized histogram of the local orientation (the angle of the ellipse) at regions of high anisotropy and thickness within a certain interval. It is shown that TSD is invariant to rotation and to some reasonable level of scale changes. Experimental results with a fish database are presented to illustrate and validate the method.
|
Freitas, Ricardo Batista;
Torres, Ricardo da Silva
OntoSAIA: An Ontology-based Tool for Image Retrieval and Semi-Automatic Annotation (in Portuguese). (conference)
I Workshop in Digital Libraries, Proc. XX Brazilian Symposium on Databases - SBBD 2005,
2005.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Freitas2005,
abstract = {This article proposes the use of image content, keywords and ontologies to improve the image annotation and retrieval processes through the enhancement of the user’s knowledge of an image database. It proposes an architecture of a flexible system capable of dealing with multiple ontologies and multiple image content descriptors to help these tasks. The validation of the idea is being done through the implementation, in Java, of the software OntoSAIA.},
author = {Ricardo Batista Freitas and Ricardo da Silva Torres},
booktitle = {I Workshop in Digital Libraries, Proc. XX Brazilian Symposium on Databases - SBBD 2005},
date = {2005-10-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/freitas05wdl.pdf},
pages = {60-79},
title = {OntoSAIA: An Ontology-based Tool for Image Retrieval and Semi-Automatic Annotation (in Portuguese).},
year = {2005}
}
This article proposes the use of image content, keywords and ontologies to improve the image annotation and retrieval processes through the enhancement of the user’s knowledge of an image database. It proposes an architecture of a flexible system capable of dealing with multiple ontologies and multiple image content descriptors to help these tasks. The validation of the idea is being done through the implementation, in Java, of the software OntoSAIA.
|
Fileto, Renato;
Medeiros, Claudia Bauzer;
Pu, Calton;
Liu, Ling;
Assad, Eduardo Delgado
Building a Semantic Web System for Scientific Applications: An Engineering Approach (conference)
6th International Conference on Web Information Systems Engineering (WISE 05), published as Springer LNCS 3806,
Springer Berlin / Heidelberg,
New York,
2005.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Fileto2005,
abstract = {This paper presents an engineering experience for building a Semantic Web compliant system for a scientific application - agricultural zoning. First, we define the concept of ontological cover and a set of relationships between such covers. These definitions, based on domain ontologies, can be used, for example, to support the discovery of services on the Web. Second, we propose a semantic acyclic restriction on ontologies which enables the efficient comparison of ontological covers. Third, we present different engineering solutions to build ontology views satisfying the acyclic restriction in a prototype. Our experimental results unveil some limitations of the current Semantic Web technology to handle large data volumes, and show that the combination of such technology with traditional data management techniques is an effective way to achieve highly functional and scalable solutions.},
address = {New York},
author = {Renato Fileto and Claudia Bauzer Medeiros and Calton Pu and Ling Liu and Eduardo Delgado Assad},
booktitle = {6th International Conference on Web Information Systems Engineering (WISE 05), published as Springer LNCS 3806},
date = {2005-10-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/wise2005.pdf},
pages = {633-642},
publisher = {Springer Berlin / Heidelberg},
title = {Building a Semantic Web System for Scientific Applications: An Engineering Approach},
volume = {3806},
year = {2005}
}
This paper presents an engineering experience for building a Semantic Web compliant system for a scientific application - agricultural zoning. First, we define the concept of ontological cover and a set of relationships between such covers. These definitions, based on domain ontologies, can be used, for example, to support the discovery of services on the Web. Second, we propose a semantic acyclic restriction on ontologies which enables the efficient comparison of ontological covers. Third, we present different engineering solutions to build ontology views satisfying the acyclic restriction in a prototype. Our experimental results unveil some limitations of the current Semantic Web technology to handle large data volumes, and show that the combination of such technology with traditional data management techniques is an effective way to achieve highly functional and scalable solutions.
|
Carazzolle, M.F.;
Formighieri, E.F.;
Digiampietri, Luciano Antonio;
Araujo, M.R.R.;
Pereira, G.A.G.
GeneProjects: a Web application for ongoing annotation in EST and Shotgun genome projects (conference)
1st International Conference of the Brazilian Association for Bioinformatics and Computational Biology (AB3C),
Caxambu, MG, Brazil,
2005.
(
BibTeX |
Tags:
Conference
)
@conference{Carazzolle2005,
address = {Caxambu, MG, Brazil},
author = {M.F. Carazzolle and E.F. Formighieri and Luciano Antonio Digiampietri and M.R.R. Araujo and G.A.G. Pereira},
booktitle = {1st International Conference of the Brazilian Association for Bioinformatics and Computational Biology (AB3C)},
date = {2005-10-01},
keyword = {Conference},
title = {GeneProjects: a Web application for ongoing annotation in EST and Shotgun genome projects},
year = {2005}
}
|
Torres, Ricardo da Silva;
Falcão, Alexandre Xavier;
Gonçalves, Marcos André;
Zhang, Baoping;
Fan, Weiguo;
Fox, Edward A.
A New Framework to Combine Descriptors for Content-Based Image Retrieval (Technical Report)
Institute of Computing, State University of Campinas,
Technical Report,
IC-05-21,
2005.
(
Abstract |
Links |
BibTeX |
Tags:
Techreport
)
@techreport{daTorres2005b,
abstract = {Methods that combine image database descriptors have strong influence on the effectiveness of content-based image retrieval (CBIR) systems. Although there are many combination functions described in the image processing literature, empirical evaluation studies have shown that those functions do not perform consistently well across different contexts (queries, image collections, users). Moreover, it is often very difficult for human beings to identify optimal combination functions for a particular application. In this paper, we propose a novel framework using {\em Genetic Programming} to combine image database descriptors for CBIR. Our framework is validated through several experiments involving two image databases and a specific domain, where the images are retrieved based on the shape of their objects.},
author = {Ricardo da Silva Torres and Alexandre Xavier Falcão and Marcos André Gonçalves and Baoping Zhang and Weiguo Fan and Edward A. Fox},
date = {2005-09-01},
institution = {Institute of Computing, State University of Campinas},
keyword = {Techreport},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/05-21.pdf},
number = {IC-05-21},
title = {A New Framework to Combine Descriptors for Content-Based Image Retrieval},
type = {Technical Report},
year = {2005}
}
Methods that combine image database descriptors have strong influence on the effectiveness of content-based image retrieval (CBIR) systems. Although there are many combination functions described in the image processing literature, empirical evaluation studies have shown that those functions do not perform consistently well across different contexts (queries, image collections, users). Moreover, it is often very difficult for human beings to identify optimal combination functions for a particular application. In this paper, we propose a novel framework using Genetic Programming to combine image database descriptors for CBIR. Our framework is validated through several experiments involving two image databases and a specific domain, where the images are retrieved based on the shape of their objects.
|
Digiampietri, Luciano Antonio;
Medeiros, Claudia Bauzer;
Setubal, Joao Carlos
A framework based in Web services orchestration for bioinformatics workflow management (article)
Genetics and Molecular Research [online],
3,
2005.
(
Abstract |
Links |
BibTeX |
Tags:
Article
)
@article{Digiampietri2005b,
abstract = {Bioinformatics activities are growing all over the world, with proliferation of data and tools. This brings new challenges: how to understand and organize these resources and how to provide interoperability among tools to achieve a given goal. We defined and implemented a framework to help meet some of these challenges. Four issues were considered: the use of Web services as a basic unit, the notion of a Semantic Web to improve interoperability at the syntactic and semantic levels, and the use of scientific workflows to coordinate services to be executed, including their interdependencies and service orchestration.},
author = {Luciano Antonio Digiampietri and Claudia Bauzer Medeiros and Joao Carlos Setubal},
date = {2005-09-01},
journal = {Genetics and Molecular Research [online]},
keyword = {Article},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/digiampietri_gmr.pdf},
number = {3},
pages = {535--542},
title = {A framework based in Web services orchestration for bioinformatics workflow management},
volume = {4},
year = {2005}
}
Bioinformatics activities are growing all over the world, with proliferation of data and tools. This brings new challenges: how to understand and organize these resources and how to provide interoperability among tools to achieve a given goal. We defined and implemented a framework to help meet some of these challenges. Four issues were considered: the use of Web services as a basic unit, the notion of a Semantic Web to improve interoperability at the syntactic and semantic levels, and the use of scientific workflows to coordinate services to be executed, including their interdependencies and service orchestration.
|
Jr, Gilberto Zonta Pastorello;
Medeiros, Claudia Bauzer;
Resende, Silvania Maria de;
Rocha, Henrique Aparecido da
Interoperability for GIS Document Management in Environmental Planning (article)
Journal on Data Semantics,
LNCS 3534,
2005.
(
Abstract |
Links |
BibTeX |
Tags:
Article
)
@article{Jr2005b,
abstract = {Environmental planning requires constant tracing and revision of activities. Planners must be provided with appropriate documentation tools to aid communication among them and support plan enactment, revision and evolution. Moreover, planners often work in distinct institutions, thus these supporting tools must interoperate in distributed environments and in a semantically coherent fashion. Since semantics are strongly related to use, documentation also enhances the ways in which users can cooperate. The emergence of the Semantic Web created the need for documenting Web data and processes, using specific standards. This paper addresses this problem, for two issues: (1) ways of documenting planning processes, in three different aspects: what was done, how it was done and why it was done that way; and (2) a framework that supports the management of those documents using Semantic Web standards.},
author = {Gilberto Zonta Pastorello Jr and Claudia Bauzer Medeiros and Silvania Maria de Resende and Henrique Aparecido da Rocha},
date = {2005-08-01},
journal = {Journal on Data Semantics},
keyword = {Article},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/pastorellojr2005.pdf},
note = {DOI 10.1007/11496168_5},
number = {LNCS 3534},
pages = {100-124},
title = {Interoperability for GIS Document Management in Environmental Planning},
volume = {3},
year = {2005}
}
Environmental planning requires constant tracing and revision of activities. Planners must be provided with appropriate documentation tools to aid communication among them and support plan enactment, revision and evolution. Moreover, planners often work in distinct institutions, thus these supporting tools must interoperate in distributed environments and in a semantically coherent fashion. Since semantics are strongly related to use, documentation also enhances the ways in which users can cooperate. The emergence of the Semantic Web created the need for documenting Web data and processes, using specific standards. This paper addresses this problem, for two issues: (1) ways of documenting planning processes, in three different aspects: what was done, how it was done and why it was done that way; and (2) a framework that supports the management of those documents using Semantic Web standards.
|
Digiampietri, Luciano Antonio;
Perdigueiro, Julia;
Junior, Aloisio de Almeida;
Faria, Daniel;
Ostroski, Eric;
Costa, Gustavo;
Perez, Marcelo Cunha
Fact and Task Oriented System for genome assembly and annotation (conference)
Proceedings of the Brazilian Symposium on Bioinformatics (BSB 2005),
2005.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Digiampietri2005,
abstract = {We present a preliminary description and results of a system to help the curation of genome assembly and annotation. Standard tools are used for these tasks, and our methodology focuses on user guidance, data visualization and integration, and data browsing aspects.},
author = {Luciano Antonio Digiampietri and Julia Perdigueiro and Aloisio de Almeida Junior and Daniel Faria and Eric Ostroski and Gustavo Costa and Marcelo Cunha Perez},
booktitle = {Proceedings of the Brazilian Symposium on Bioinformatics (BSB 2005)},
date = {2005-07-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/2005_BSB2005_LNBI_35940238.pdf},
note = {ISBN 3-540-28008-1},
pages = {238-241},
title = {Fact and Task Oriented System for genome assembly and annotation},
volume = {3594},
year = {2005}
}
We present a preliminary description and results of a system to help the curation of genome assembly and annotation. Standard tools are used for these tasks, and our methodology focuses on user guidance, data visualization and integration, and data browsing aspects.
|
Baranauskas, Maria Cecília Calani;
Schimiguel, Juliano;
Simoni, Carlos Alberto Cocozza;
Medeiros, Claudia Bauzer
Guiding the Process of Requirements Elicitation with a Semiotic Approach – A Case Study. (conference)
Proc. HCI International - HCI2005,
2005.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Baranauskas2005,
abstract = {Requirements Engineering (RE) is the process of discovering the purpose of a prospective software system, by identifying stakeholders and their needs, and documenting these in a form that is suitable to analysis, communication, and subsequent implementation. Requirements elicitation is closely related and even interleaved to other RE activities such as: modeling, analysis & negotiation, and communication of requirements. RE is a multidisciplinary and human-centered activity. This paper presents a participatory approach to requirements elicitation that deals with functional and non-functional requirements considering social, political, cultural and ethical issues involved in understanding the problem in the process of RE. The proposed approach is theoretically grounded in methods and models from Organizational Semiotics. The proposed approach is illustrated with a case study related to the development of an application of Geographical Information Systems in the Web (Web GIS). Results of the case study allowed us to observe the contribution of OS in the proposed approach, including elements to inform the user interface design of the system.},
author = {Maria Cecília Calani Baranauskas and Juliano Schimiguel and Carlos Alberto Cocozza Simoni and Claudia Bauzer Medeiros},
booktitle = {Proc. HCI International - HCI2005},
date = {2005-07-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/CHI2005.pdf},
pages = {100-110},
title = {Guiding the Process of Requirements Elicitation with a Semiotic Approach – A Case Study.},
year = {2005}
}
Requirements Engineering (RE) is the process of discovering the purpose of a prospective software system, by identifying stakeholders and their needs, and documenting these in a form that is suitable to analysis, communication, and subsequent implementation. Requirements elicitation is closely related and even interleaved to other RE activities such as: modeling, analysis & negotiation, and communication of requirements. RE is a multidisciplinary and human-centered activity. This paper presents a participatory approach to requirements elicitation that deals with functional and non-functional requirements considering social, political, cultural and ethical issues involved in understanding the problem in the process of RE. The proposed approach is theoretically grounded in methods and models from Organizational Semiotics. The proposed approach is illustrated with a case study related to the development of an application of Geographical Information Systems in the Web (Web GIS). Results of the case study allowed us to observe the contribution of OS in the proposed approach, including elements to inform the user interface design of the system.
|
Medeiros, Claudia Bauzer;
Crosta, Alvaro Penteado;
Lamparelli, Rubens Augusto;
Rocha, Jansle Vieira;
Filho, Carlos Roberto de Souza;
Jr., Jurandir Zullo
Remote Sensing Research at the State University of Campinas, Brazil (article)
IEEE Geoscience and Remote Sensing Newsletter,
2005.
(
Abstract |
Links |
BibTeX |
Tags:
Article
)
@article{Medeiros2005b,
abstract = {The State University of Campinas (UNICAMP) is one of Brazil’s foremost universities, being responsible for 15% of the country’s scientific publications. Every year, over 50,000 students from all over the country apply to enter one of the university’s 60 undergraduate courses, and only 10% meet the strict entrance examinations. Created over 35 years ago to be a research-oriented university, half of its 30,000 students are enrolled in graduate programs, and the University awards every year 1000 Masters and 700 PhD degrees. Still another 14,000 people are enrolled in continuing education courses. This student body is taught by 1800 faculty, 97% of which have PhD degrees. This profile of student and faculty qualification, allied to good research facilities, provides very good opportunities for innovative research, involving both graduate and undergraduate students. Remote sensing (RS) research, by nature multidisciplinary, has found in UNICAMP a good environment to flourish. Several laboratories conduct work on different aspects of the use of this technology, involving faculty with distinct profiles. Rather than one single center dedicated to RS aspects, several laboratories develop initiatives in this area, with distinct application domains in mind. This paper gives a brief overview of the work conducted along two distinct domains – agriculture and geology – with projects resulting from cooperation of experts in computer science and in the study and application of RS to these domains. It must be stressed that other groups in the university also conduct work involving remote sensing technology – e.g., for biodiversity analysis – but this paper presents a good sample of relevant ongoing projects. The authors work in four distinct laboratories, but collaborate in various research and training activities. As will be seen, a few of the projects described in the sections that follow involve people from all laboratories concerned.},
author = {Claudia Bauzer Medeiros and Alvaro Penteado Crosta and Rubens Augusto Lamparelli and Jansle Vieira Rocha and Carlos Roberto de Souza Filho and Jurandir Zullo Jr.},
date = {2005-06-01},
journal = {IEEE Geoscience and Remote Sensing Newsletter},
keyword = {Article},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/grss05.pdf},
pages = {11-16},
title = {Remote Sensing Research at the State University of Campinas, Brazil},
year = {2005}
}
The State University of Campinas (UNICAMP) is one of Brazil’s foremost universities, being responsible for 15% of the country’s scientific publications. Every year, over 50,000 students from all over the country apply to enter one of the university’s 60 undergraduate courses, and only 10% meet the strict entrance examinations. Created over 35 years ago to be a research-oriented university, half of its 30,000 students are enrolled in graduate programs, and the University awards every year 1000 Masters and 700 PhD degrees. Still another 14,000 people are enrolled in continuing education courses. This student body is taught by 1800 faculty, 97% of which have PhD degrees. This profile of student and faculty qualification, allied to good research facilities, provides very good opportunities for innovative research, involving both graduate and undergraduate students. Remote sensing (RS) research, by nature multidisciplinary, has found in UNICAMP a good environment to flourish. Several laboratories conduct work on different aspects of the use of this technology, involving faculty with distinct profiles. Rather than one single center dedicated to RS aspects, several laboratories develop initiatives in this area, with distinct application domains in mind. This paper gives a brief overview of the work conducted along two distinct domains – agriculture and geology – with projects resulting from cooperation of experts in computer science and in the study and application of RS to these domains. It must be stressed that other groups in the university also conduct work involving remote sensing technology – e.g., for biodiversity analysis – but this paper presents a good sample of relevant ongoing projects. The authors work in four distinct laboratories, but collaborate in various research and training activities. As will be seen, a few of the projects described in the sections that follow involve people from all laboratories concerned.
|
Junior, Eduardo Tarciso Soares
Mechanisms to Speed up Foreign Trade Processes (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
2005.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{Junior2005,
abstract = {The dynamism of foreign trade, a consequence of the globalization of the economy, has resulted in considerable growth of goods exchanged all over the world. In order to cope with this demand, the Brazilian government has been continuously investing in the modernization of the installations, equipment and software to offer efficient, safe and faster services to people and enterprises involved in foreign trade. The Integrated System of Foreign Trade - SISCOMEX - was developed as part of this effort. This system, developed by the agencies that control foreign trade, is responsible for controlling and storing all related information regarding Importations and Exportations, as well as Special Customers Trade. The system facilitates the supervisory tasks of the IRS, thus speeding up importation and exportation of goods in ports and airports. The system's DBMS is hosted by a mainframe platform in SERPRO. However, data communication is done via MDB files exchanged with companies which are connected to the system through links. Every company interested in foreign trade needs to interact with SISCOMEX. This interaction is complicated because companies and government do not always have database compatibility. The dissertation covers the problems of integration and communication between companies and SISCOMEX. The main contributions are: the analysis of the interaction among the participating information systems; a proposal to standardize the most common foreign trade processes, according to processing requirements and interface standards. The model is validated through a real life case study and prototype.},
author = {Eduardo Tarciso Soares Junior},
date = {2005-04-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/soares.pdf},
school = {Instituto de Computação - Unicamp},
title = {Mechanisms to Speed up Foreign Trade Processes},
year = {2005}
}
The dynamism of foreign trade, a consequence of the globalization of the economy, has resulted in considerable growth of goods exchanged all over the world. In order to cope with this demand, the Brazilian government has been continuously investing in the modernization of the installations, equipment and software to offer efficient, safe and faster services to people and enterprises involved in foreign trade. The Integrated System of Foreign Trade - SISCOMEX - was developed as part of this effort. This system, developed by the agencies that control foreign trade, is responsible for controlling and storing all related information regarding Importations and Exportations, as well as Special Customers Trade. The system facilitates the supervisory tasks of the IRS, thus speeding up importation and exportation of goods in ports and airports. The system's DBMS is hosted by a mainframe platform in SERPRO. However, data communication is done via MDB files exchanged with companies which are connected to the system through links. Every company interested in foreign trade needs to interact with SISCOMEX. This interaction is complicated because companies and government do not always have database compatibility. The dissertation covers the problems of integration and communication between companies and SISCOMEX. The main contributions are: the analysis of the interaction among the participating information systems; a proposal to standardize the most common foreign trade processes, according to processing requirements and interface standards. The model is validated through a real life case study and prototype.
|
Jr, Gilberto Zonta Pastorello
Publication and Integration of Scientific Workflows on the Web (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
2005.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{Jr2005,
abstract = {Scientific activities involve complex multidisciplinary processes and demand cooperative work. This entails a series of open problems in supporting this work ranging from data and process management to appropriate user interfaces for software. This work contributes in providing solutions to some of these problems. It focuses on improving the documentation mechanisms of processes and making it possible to publish and integrate them on the Web. This eases the specification and execution of distributed processes on the Web as well as the reuse of these specifications. The work was based on Semantic Web standards aiming at interoperability and the use of scientific workflows for modeling processes and using them on the Web. The main contributions of this work are: (i) a data model, which takes Semantic Web standards into consideration, for representing scientific workflows and storing them in a database. The model induces a workflow specification method that favors reuse and integration of these specifications; (ii) a comparative analysis of standards proposals for representing workflows in XML; (iii) the proposal of a Web-centered architecture for the management of documents (mainly workflows); and, (iv) the partial implementation of this architecture. The work uses as a motivation the area of environmental planning as a means to elucidate requirements and validate the proposal.},
author = {Gilberto Zonta Pastorello Jr},
date = {2005-04-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/Tese_Pastorello.pdf},
school = {Instituto de Computação - Unicamp},
title = {Publication and Integration of Scientific Workflows on the Web},
year = {2005}
}
Scientific activities involve complex multidisciplinary processes and demand cooperative work. This entails a series of open problems in supporting this work ranging from data and process management to appropriate user interfaces for software. This work contributes in providing solutions to some of these problems. It focuses on improving the documentation mechanisms of processes and making it possible to publish and integrate them on the Web. This eases the specification and execution of distributed processes on the Web as well as the reuse of these specifications. The work was based on Semantic Web standards aiming at interoperability and the use of scientific workflows for modeling processes and using them on the Web. The main contributions of this work are: (i) a data model, which takes Semantic Web standards into consideration, for representing scientific workflows and storing them in a database. The model induces a workflow specification method that favors reuse and integration of these specifications; (ii) a comparative analysis of standards proposals for representing workflows in XML; (iii) the proposal of a Web-centered architecture for the management of documents (mainly workflows); and, (iv) the partial implementation of this architecture. The work uses as a motivation the area of environmental planning as a means to elucidate requirements and validate the proposal.
|
Aragao, Paulo Sergio Sampaio de
GeoMarketing: Models and Systems, with Applications in Telecommunications (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
2005.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{deAragao2005,
abstract = {The goal of geomarketing is to manage and combine spatial and business data for decision support within the domain of marketing application. This concept is evolving to include other application domains. However, the models proposed are seldom properly implemented; moreover, existing systems are not extensible, and support only one specific simulation model. This thesis contributes to solving these issues. The main contributions are the following: (1) proposal of a conceptual architecture for a geomarketing information system, that takes into consideration new methods and technologies; (2) a comparative study of distinct types of spatial marketing models and techniques; (3) identification of some kinds of problems in the telephone/telecommunications domain that can profit from the use of such techniques; (4) validation of the architecture by means of a Web geomarketing prototype, VoroMarketing. This is a modular and extensible prototype, which supports the use of distinct spatial models; (5) configuration of the prototype to solve specific problems in the telecommunications domain.},
author = {Paulo Sergio Sampaio de Aragao},
date = {2005-03-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/aragao.pdf},
school = {Instituto de Computação - Unicamp},
title = {GeoMarketing: Models and Systems, with Applications in Telecommunications},
year = {2005}
}
The goal of geomarketing is to manage and combine spatial and business data for decision support within the domain of marketing application. This concept is evolving to include other application domains. However, the models proposed are seldom properly implemented; moreover, existing systems are not extensible, and support only one specific simulation model. This thesis contributes to solving these issues. The main contributions are the following: (1) proposal of a conceptual architecture for a geomarketing information system, that takes into consideration new methods and technologies; (2) a comparative study of distinct types of spatial marketing models and techniques; (3) identification of some kinds of problems in the telephone/telecommunications domain that can profit from the use of such techniques; (4) validation of the architecture by means of a Web geomarketing prototype, VoroMarketing. This is a modular and extensible prototype, which supports the use of distinct spatial models; (5) configuration of the prototype to solve specific problems in the telecommunications domain.
|
Torres, Ricardo da Silva;
Medeiros, Claudia Bauzer;
Falcão, Alexandre Xavier
An Environment for Managing Images and Spatial Data for Development of Biodiversity Applications. (article)
First place - XVIII Thesis Competition - XXV Conference of the Brazilian Computer Society,
2005.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@article{daTorres2005b,
abstract = {There is a wide range of environmental applications requiring sophisticated management of several kinds of data, including spatial data and images of living beings. However, available information systems offer very limited support for managing such data in an integrated manner. This thesis provides a solution to combine these query requirements, which takes advantage of current digital library technology to manage networked collections of heterogeneous data in an integrated fashion. The research contributes to solve problems of specification and implementation of biodiversity information systems that manage images of species, textual descriptions and spatial data in an integrated way.},
author = {Ricardo da Silva Torres and Claudia Bauzer Medeiros and Alexandre Xavier Falcão},
date = {2005-01-01},
journal = {First place - XVIII Thesis Competition - XXV Conference of the Brazilian Computer Society},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/torres05ctd.pdf},
title = {An Environment for Managing Images and Spatial Data for Development of Biodiversity Applications.},
year = {2005}
}
|
Santanchè, André;
Medeiros, Claudia Bauzer
Self Describing Components: Searching for Digital Artifacts on the Web. (conference)
Proc. XX Brazilian Symposium on Databases - SBBD 2005,
2005.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Santanche2005,
abstract = {The Semantic Web has opened new horizons in exploring Web functionality. One of the many challenges is to proactively support the reuse of digital artifacts stored in repositories all over the world. Our goal is to contribute towards this issue, proposing a mechanism for describing and discovering artifacts called Digital Content Components (DCCs). DCCs are self-contained stored entities that may comprise any digital content, such as pieces of software, multimedia or text. Their specification takes advantage of Semantic Web standards and ontologies, both of which are used in the discovery process. DCC construction and composition procedures naturally lend themselves to pattern-matching and subsumption-based search. Thus, many existing methods for Web searching can be extended to look for reusable artifacts. We validate the proposal discussing its implementation for agro-environmental planning.},
author = {André Santanchè and Claudia Bauzer Medeiros},
booktitle = {Proc. XX Brazilian Symposium on Databases - SBBD 2005},
date = {2005-01-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/SBBD2005-SelfDescribing.pdf},
pages = {10-24},
title = {Self Describing Components: Searching for Digital Artifacts on the Web.},
year = {2005}
}
|
Moreira, L. M.;
Souza, R. F. de;
Digiampietri, Luciano Antonio;
Silva, A. C. R. da;
Setubal, Joao Carlos
Comparative analyses of Xanthomonas and Xylella complete genomes (article)
OMICS,
1,
2005.
(
Abstract |
Links |
BibTeX |
Tags:
Article
)
@article{Moreira2005,
abstract = {Computational analyses of four bacterial genomes of the Xanthomonadaceae family reveal new unique genes that may be involved in adaptation, pathogenicity, and host specificity. The Xanthomonas genus presents 3636 unique genes distributed in 1470 families, while the Xylella genus presents 1026 unique genes distributed in 375 families. Among Xanthomonas-specific genes, we highlight a large number of cell wall degrading enzymes, proteases, and iron receptors, a set of energy metabolism genes, a second copy of the type II secretion system, the type III secretion system, flagella and chemotactic machinery, and the xanthomonadin synthesis gene cluster. Important genes unique to the Xylella genus are an additional copy of a type IV pili gene cluster and the complete machinery of colicin V synthesis and secretion. Intersections of gene sets from both genera reveal a cluster of genes homologous to Salmonella's SPI-7 island in Xanthomonas axonopodis pv citri and Xylella fastidiosa 9a5c, which might be involved in host specificity. Each genome also presents important unique genes, such as an HMS cluster, the kdgT gene, and O-antigen in Xanthomonas axonopodis pv citri; a number of avrBS genes and a distinct O-antigen in Xanthomonas campestris pv campestris; a type I restriction-modification system and a nickase gene in Xylella fastidiosa 9a5c; and a type II restriction-modification system and two genes related to peptidoglycan biosynthesis in Xylella fastidiosa temecula 1. All these differences imply a considerable number of gene gains and losses during the divergence of the four lineages, and are associated with structural genome modifications that may have a direct relation with the mode of transmission, adaptation to specific environments and pathogenicity of each organism.},
author = {L. M. Moreira and R. F. de Souza and Luciano Antonio Digiampietri and A. C. R. da Silva and Joao Carlos Setubal},
date = {2005-01-01},
journal = {OMICS},
keyword = {Article},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/moreiraOMICS.pdf},
number = {1},
pages = {43-76},
title = {Comparative analyses of Xanthomonas and Xylella complete genomes},
volume = {9},
year = {2005}
}
|
Medeiros, Claudia Bauzer;
Pérez-Alcazar, José;
Digiampietri, Luciano;
Jr., Gilberto Zonta Pastorello;
Santanchè, André;
Torres, Ricardo da Silva;
Madeira, Edmundo;
Bacarin, Evandro
WOODSS and the Web: Annotating and Reusing Scientific Workflows (article)
SIGMOD Record,
3,
2005.
(
Abstract |
Links |
BibTeX |
Tags:
Article
)
@article{Medeiros2005b,
abstract = {This paper discusses ongoing research on scientific workflows at the Institute of Computing, University of Campinas (IC - UNICAMP), Brazil. Our projects with bio-scientists have led us to develop a scientific workflow infrastructure named WOODSS. This framework has two main objectives in mind: to help scientists to specify and annotate their models and experiments; and to document collaborative efforts in scientific activities. In both contexts, workflows are annotated and stored in a database. This annotated scientific workflow database is treated as a repository of (sometimes incomplete) approaches to solving scientific problems. Thus, it serves two purposes: it allows comparison of distinct solutions to a problem, and their designs; and it provides reusable and executable building blocks to construct new scientific workflows, to meet specific needs. Annotations, moreover, allow further insight into methodology, success rates, underlying hypotheses and other issues in experimental activities. The many research challenges faced by us at the moment include: the extension of this framework to the Web, following Semantic Web standards; providing means of discovering workflow components on the Web for reuse; and taking advantage of planning in Artificial Intelligence to support composition mechanisms. This paper describes our efforts in these directions, tested over two domains: agro-environmental planning and bioinformatics.},
author = {Claudia Bauzer Medeiros and José Pérez-Alcazar and Luciano Digiampietri and Gilberto Zonta Pastorello Jr. and André Santanchè and Ricardo da Silva Torres and Edmundo Madeira and Evandro Bacarin},
date = {2005-01-01},
journal = {SIGMOD Record},
keyword = {Article},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/p18-special-sw-section-3.pdf},
number = {3},
pages = {18-23},
title = {WOODSS and the Web: Annotating and Reusing Scientific Workflows},
volume = {34},
year = {2005}
}
|
Medeiros, Claudia Bauzer
From Subject of Change to Agent of Change - Women and IT in Brazil (conference)
Proceedings of the international symposium on Women and ICT: creating global transformation,
ACM,
2005.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Medeiros2005,
abstract = {Brazil has one of South America's largest information technology (IT) communities. One hundred million people voted electronically for President and congress in 2004, and 97 percent of all income tax declarations are submitted via the Internet. Over 20,000 students graduate every year in computer science alone, and two of the federal government's four industrial priorities are related to IT --- software and semiconductors. Though women represent 60 percent of the country's college graduates, less than 5 percent choose Computer Science as a major. Programs to foster gender equality have little intersection with the national digital inclusion program. This paper points out actions that may be considered to allow Brazilian women to become full citizens of the information society. These actions concern formal and informal means of education, and on visibility and advocacy.},
author = {Claudia Bauzer Medeiros},
booktitle = {Proceedings of the international symposium on Women and ICT: creating global transformation},
date = {2005-01-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/p15-medeiros.pdf},
pages = {15},
publisher = {ACM},
title = {From Subject of Change to Agent of Change - Women and IT in Brazil},
volume = {126},
year = {2005}
}
|
Kaster, Daniel V.;
Medeiros, Claudia Bauzer;
Rocha, Heloisa Vieira
Supporting Modeling and Problem Solving from Precedent Experiences: The Role of Workflows and Case-Based Reasoning (article)
Environmental Modeling and Software,
2005.
(
Abstract |
Links |
BibTeX |
Tags:
Article
)
@article{Kaster2005,
abstract = {Environmental planners take advantage of Spatial Decision Support Systems (SDSS) to deal with data and models for problem solving. However, these kinds of software usually provide generic models, which require considerable effort to be specialized to fit particular situations. This paper explores a solution which couples Case-Based Reasoning (CBR) to an existing SDSS, named WOODSS, to help planners to profit from others' experiences. WOODSS is based on a Geographic Information System, and interactively documents planners' modeling activities by means of scientific workflows, which are stored in a database. This paper describes how CBR has been used as part of WOODSS's retrieval and storage mechanisms, to identify similar models to reuse in new decision processes. This adds a new dimension to the functionality of available SDSS.},
author = {Daniel V. Kaster and Claudia Bauzer Medeiros and Heloisa Vieira Rocha},
date = {2005-01-01},
journal = {Environmental Modeling and Software},
keyword = {Article},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/EMS20030035.pdf},
pages = {689-704},
title = {Supporting Modeling and Problem Solving from Precedent Experiences: The Role of Workflows and Case-Based Reasoning},
volume = {20},
year = {2005}
}
|
Borges, Karla;
Jr, Clodoveu Davis;
Silva, Altrigran;
Laender, Alberto;
Medeiros, Claudia Bauzer;
Carvalho, Joyce C. P.
Integrating Web Data and Geographic Knowledge into Spatial Databases (conference)
Chapter II in the book: Spatial Databases: Technologies, Techniques and Trends,
Idea Group Publishing Inc,
Journal on Data Semantics,
2005.
(
Abstract |
BibTeX |
Tags:
Article, Conference
)
@conference{Borges2005,
abstract = {Spatial database systems have been an active area of research over the past 20 years. A large number of research efforts have appeared in the literature aimed at effective modelling of spatial data and efficient processing of spatial queries. This book investigates several aspects of a spatial database system, and includes recent research efforts in this field. More specifically, some of the topics covered are: spatial data modelling; indexing of spatial and spatio-temporal objects; data mining and knowledge discovery in spatial and spatio-temporal databases; management issues; and query processing for moving objects. Therefore, the reader will be able to get in touch with several important issues that the research community is dealing with. Moreover, each chapter is self-contained, and it is easy for the non-specialist to grasp the main issues. The authors of the book's chapters are well-known researchers in spatial databases, and have offered significant contributions to the spatial database literature. The chapters of this book provide an in-depth study of current technologies, techniques and trends in spatial and spatio-temporal database systems research. Each chapter has been carefully prepared by the contributing authors, in order to conform with the book's requirements.},
author = {Karla Borges and Clodoveu Davis Jr and Altrigran Silva and Alberto Laender and Claudia Bauzer Medeiros and Joyce C. P. Carvalho},
booktitle = {Chapter II in the book: Spatial Databases: Technologies, Techniques and Trends},
date = {2005-01-01},
journal = {Journal on Data Semantics},
keyword = {Article, Conference},
note = {Editors Y. Manolopoulos, A. Papadopoulos and M. Vassilakopoulos},
pages = {23-47},
publisher = {Idea Group Publishing Inc},
title = {Integrating Web Data and Geographic Knowledge into Spatial Databases},
year = {2005}
}
|
2004 |
Santanchè, André;
Medeiros, Claudia Bauzer
Geographic Digital Content Components. (conference)
Proc. VI Brazilian Symposium on GeoInformatics - GeoInfo 2004,
2004.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Santanche2004b,
abstract = {Projects using geographic information tools involve a large variety of data objects, represented in different formats. Many efforts pursue standards to represent each kind of data object, and the interoperability between geographic information tools. The proliferation of data and tools raises the need for their reuse. This need can be extended to project reuse. This work presents a proposal to reuse geographic information projects based on a model called digital content component. This model can represent all elements involved in a project -- including software components -- and their relationships in an open homogeneous format.},
author = {André Santanchè and Claudia Bauzer Medeiros},
booktitle = {Proc. VI Brazilian Symposium on GeoInformatics - GeoInfo 2004},
date = {2004-11-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/GeoInfo2004-GDCC.pdf},
title = {Geographic Digital Content Components.},
year = {2004}
}
|
Santanchè, André;
Medeiros, Claudia Bauzer
Managing Dynamic Repositories for Digital Content Components. (conference)
Proc. 9th International Conference on Extending Database Technology - EDBT 2004 PhD Workshop,
Springer Berlin / Heidelberg,
2004.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Santanche2004,
abstract = {The Semantic Web pursues interoperability at syntactic and semantic levels, to face the proliferation of data files with different purposes and representation formats. One challenge is how to represent such data, to allow users and applications to easily find, use and combine them. The paper proposes an infrastructure to meet those goals. The basis of the proposal is the notion of digital content components, which extends the Software Engineering notion of software component. The infrastructure offers tools to combine and extend these components, upon user request, managing them within dynamic repositories. The infrastructure adopts XML and RDF standards to foster interoperability, composition, adaptation and documentation of content data. This work was motivated by reuse needs observed in two specific application domains: education and agro-environmental planning.},
author = {André Santanchè and Claudia Bauzer Medeiros},
booktitle = {Proc. 9th International Conference on Extending Database Technology - EDBT 2004 PhD Workshop},
date = {2004-11-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/EDBT2004.pdf},
note = {DOI 10.1007/b101218},
pages = {66-77},
publisher = {Springer Berlin / Heidelberg},
title = {Managing Dynamic Repositories for Digital Content Components.},
volume = {3268/2004},
year = {2004}
}
|
Schimiguel, Juliano;
Baranauskas, Maria Cecília Calani;
Medeiros, Claudia Bauzer
Inspecting User Interface Quality in Web GIS Applications. (conference)
Proc. VI Brazilian Symposium on GeoInformatics - GEOINFO2004,
Campos do Jordao, SP, Brazil,
2004.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Baranauskas2004b,
abstract = {Web GIS applications can be found in many domains. The quality of the interfaces of applications determines not only the usability of such applications, but the possibilities offered to their users. This work investigates aspects of interface quality for Web GIS applications. The approach adopts an inspection evaluation based on ISO 9241. Preliminary results show the effectiveness of such an approach to user interface evaluation as a complement to tests with users.},
address = {Campos do Jordao, SP, Brazil},
author = {Juliano Schimiguel and Maria Cecília Calani Baranauskas and Claudia Bauzer Medeiros},
booktitle = {Proc. VI Brazilian Symposium on GeoInformatics - GEOINFO2004},
date = {2004-11-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/GEOINFO2004.pdf},
pages = {201-219},
title = {Inspecting User Interface Quality in Web GIS Applications.},
year = {2004}
}
|
Torres, Ricardo da Silva;
Falcão, Alexandre Xavier
Contour Salience Descriptors for Effective Image Retrieval and Analysis. (Technical Report)
Institute of Computing, State University of Campinas,
Technical Report,
IC-04-11,
2004.
(
Abstract |
Links |
BibTeX |
Tags:
Techreport
)
@techreport{daTorres2004b,
abstract = {This work exploits the resemblance between content-based image retrieval and image analysis with respect to the design of image descriptors and their effectiveness. In this context, two shape descriptors are proposed: contour saliences and segment saliences. Contour saliences revisits its original definition, where the location of concave points was a problem, and provides a robust approach to incorporate concave saliences. Segment saliences introduces salience values for contour segments, making it possible to use an optimal matching algorithm as distance function. The proposed descriptors are compared with convex contour saliences, curvature scale space, and beam angle statistics using a fish database with 11,000 images organized in 1,100 distinct classes. The results indicate segment saliences as the most effective descriptor for this particular application and confirm the improvement of the contour salience descriptor in comparison with convex contour saliences.},
author = {Ricardo da Silva Torres and Alexandre Xavier Falcão},
date = {2004-10-01},
institution = {Institute of Computing, State University of Campinas},
keyword = {Techreport},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/04-11.pdf},
number = {IC-04-11},
title = {Contour Salience Descriptors for Effective Image Retrieval and Analysis.},
type = {Technical Report},
year = {2004}
}
|
Torres, Ricardo da Silva
An Environment for Management of Images and Spatial Data for Development of Biodiversity Applications (phdthesis)
Instituto de Computação - Unicamp,
phdthesis,
2004.
(
Abstract |
Links |
BibTeX |
Tags:
PhDThesis
)
@phdthesis{daTorres2004,
abstract = {There is a wide range of environmental applications requiring sophisticated management of several kinds of data, including spatial data and images of living beings. However, available information systems offer very limited support for managing such data in an integrated manner. On the one hand, environmental applications based on Geographic Information Systems (GIS) allow spatially correlating geophysical data and information on living species. On the other hand, image information systems used by biologists provide management of photos of landscapes and/or animals, but without any kind of geographical referencing. This thesis provides a solution to combine these query requirements, which takes advantage of current digital library technology to manage networked collections of heterogeneous data in an integrated fashion. The research thus contributes to solve problems of specification and implementation of biodiversity information systems that manage images of species, textual descriptions and spatial data in an integrated way, under the digital library perspective. This solution provides biodiversity researchers with new querying options. The main contributions of this thesis are: (i) a generic architecture, based on digital library components, for managing heterogeneous data collections, to access biodiversity data sources (text, images, and spatial data); (ii) a proposal of new shape descriptors for supporting content-based image retrieval; (iii) a new digital library component, for content-based image search; (iv) adoption of distinct visual structures for exploring query results in an image database; and (v) partial validation of the architecture, through implementation of a prototype that uses fish-related data.},
author = {Ricardo da Silva Torres},
date = {2004-10-01},
keyword = {PhDThesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/torres.pdf},
school = {Instituto de Computação - Unicamp},
title = {An Environment for Management of Images and Spatial Data for Development of Biodiversity Applications},
year = {2004}
}
There is a wide range of environmental applications requiring sophisticated management of several kinds of data, including spatial data and images of living beings. However, available information systems offer very limited support for managing such data in an integrated manner. On the one hand, environmental applications based on Geographic Information Systems (GIS) allow spatially correlating geophysical data and information on living species. On the other hand, image information systems used by biologists provide management of photos of landscapes and/or animals, but without any kind of geographical referencing. This thesis provides a solution to combine these query requirements, which takes advantage of current digital library technology to manage networked collections of heterogeneous data in an integrated fashion. The research thus contributes to solving problems of specification and implementation of biodiversity information systems that manage images of species, textual descriptions and spatial data in an integrated way, under the digital library perspective. This solution provides biodiversity researchers with new querying options. The main contributions of this thesis are: (i) a generic architecture, based on digital library components, for managing heterogeneous data collections, to access biodiversity data sources (text, images, and spatial data); (ii) a proposal of new shape descriptors for supporting content-based image retrieval; (iii) a new digital library component, for content-based image search; (iv) adoption of distinct visual structures for exploring query results in an image database; and (v) partial validation of the architecture, through implementation of a prototype that uses fish-related data.
|
Jr, Gilberto Zonta Pastorello;
Medeiros, Claudia Bauzer
Integration of Scientific Workflows on the Web (conference)
Proceedings of the III Workshop de Teses e Dissertações em Bancos de Dados -- 19º Simpósio Brasileiro de Bancos de Dados,
2004.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Jr2004,
abstract = {Scientists have traditionally shared data, experiments and research results. Now, they continue to do this via electronic networks and the Internet, but often without an appropriate framework. One possible approach to this problem is coordinating cooperation via scientific workflows on the Web. Our research contributes to these efforts in two directions: proposal of a model compliant with Web standards to store workflow components in databases and publish them on the Web; and development of a set of Web-based tools to specify, edit and compose workflow components.},
author = {Gilberto Zonta Pastorello Jr and Claudia Bauzer Medeiros},
booktitle = {Proceedings of the III Workshop de Teses e Dissertações em Bancos de Dados -- 19º Simpósio Brasileiro de Bancos de Dados},
date = {2004-10-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/SBBD2004-Workshop-Anais.pdf},
pages = {44-49},
title = {Integration of Scientific Workflows on the Web},
year = {2004}
}
Scientists have traditionally shared data, experiments and research results. Now, they continue to do this via electronic networks and the Internet, but often without an appropriate framework. One possible approach to this problem is coordinating cooperation via scientific workflows on the Web. Our research contributes to these efforts in two directions: proposal of a model compliant with Web standards to store workflow components in databases and publish them on the Web; and development of a set of Web-based tools to specify, edit and compose workflow components.
|
Digiampietri, Luciano Antonio;
Medeiros, Claudia Bauzer;
Setubal, Joao Carlos
A framework based in Web services orchestration for bioinformatics workflow management (conference)
Proceedings of the Third Brazilian Workshop on Bioinformatics,
2004.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Digiampietri2004,
abstract = {Bioinformatics activities are growing all over the world, with proliferation of data and tools. This brings new challenges: how to understand and organize these resources and how to provide interoperability among tools to achieve a given goal. We defined and implemented a framework to help meet some of these challenges. Four issues were considered: the use of Web services as a basic unit, the notion of a Semantic Web to improve interoperability at the syntactic and semantic levels, and the use of scientific workflows to coordinate services to be executed, including their interdependencies and service orchestration.},
author = {Luciano Antonio Digiampietri and Claudia Bauzer Medeiros and Joao Carlos Setubal},
booktitle = {Proceedings of the Third Brazilian Workshop on Bioinformatics},
date = {2004-10-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/2004_WOB2004.pdf},
title = {A framework based in Web services orchestration for bioinformatics workflow management},
year = {2004}
}
Bioinformatics activities are growing all over the world, with proliferation of data and tools. This brings new challenges: how to understand and organize these resources and how to provide interoperability among tools to achieve a given goal. We defined and implemented a framework to help meet some of these challenges. Four issues were considered: the use of Web services as a basic unit, the notion of a Semantic Web to improve interoperability at the syntactic and semantic levels, and the use of scientific workflows to coordinate services to be executed, including their interdependencies and service orchestration.
|
Schimiguel, Juliano;
Baranauskas, Maria Cecília Calani;
Medeiros, Claudia Bauzer
Investigando Aspectos de Interação em Aplicações SIG na Web voltadas ao Domínio Agrícola (Investigating Interaction Aspects in WEB GIS Applications for Agriculture) (conference)
Proc. VI Simpósio Sobre Fatores Humanos em Sistemas Computacionais - IHC2004,
Curitiba, PR, Brazil,
2004.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Baranauskas2004,
abstract = {Geographical Information Systems (GIS) allow the manipulation, management and visualization of georeferenced data. The interest in GIS applications has increased in recent years. Currently, Web GIS applications make geographic information dispersed in different places available through the Internet. There are several categories of GIS applications, in different scales and application domains, ranging from urban applications to environmental problems. The importance of Web GIS for the agricultural domain comes from the fact that they function as a useful tool for users who work directly or indirectly in that domain: agriculturists, cooperatives, and government agencies. Considering the strategic value of these systems and the wide range of different prospective users, this work presents a survey of Web GIS applications with emphasis on the agricultural domain, and investigates user-system interaction aspects in these applications.},
address = {Curitiba, PR, Brazil},
author = {Juliano Schimiguel and Maria Cecília Calani Baranauskas and Claudia Bauzer Medeiros},
booktitle = {Proc. VI Simpósio Sobre Fatores Humanos em Sistemas Computacionais - IHC2004},
date = {2004-10-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/IHC2004.pdf},
pages = {113-121},
title = {Investigando Aspectos de Interação em Aplicações SIG na Web voltadas ao Domínio Agrícola (Investigating Interaction Aspects in WEB GIS Applications for Agriculture)},
year = {2004}
}
Geographical Information Systems (GIS) allow the manipulation, management and visualization of georeferenced data. The interest in GIS applications has increased in recent years. Currently, Web GIS applications make geographic information dispersed in different places available through the Internet. There are several categories of GIS applications, in different scales and application domains, ranging from urban applications to environmental problems. The importance of Web GIS for the agricultural domain comes from the fact that they function as a useful tool for users who work directly or indirectly in that domain: agriculturists, cooperatives, and government agencies. Considering the strategic value of these systems and the wide range of different prospective users, this work presents a survey of Web GIS applications with emphasis on the agricultural domain, and investigates user-system interaction aspects in these applications.
|
Bacarin, Evandro;
Medeiros, Claudia Bauzer;
Madeira, Edmundo
A Collaborative Model for Agricultural Supply Chains. (conference)
Proc. OTM Confederated International Conferences, CoopIS, DOA, and ODBASE 2004 - LNCS 3290,
Springer Berlin / Heidelberg,
2004.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Bacarin2004,
abstract = {This paper presents a collaborative model for agricultural supply chains that supports negotiation, renegotiation, coordination and documentation mechanisms, adapted to situations found in this kind of supply chain – such as return flows and composite regulations. This model comprises basic building blocks and elements to support a chain's dynamic execution. The model is supported by an architecture where chain elements are mapped to Web Services and their dynamics to service orchestration. Model and architecture are motivated by a real case study, for dairy supply chains.},
author = {Evandro Bacarin and Claudia Bauzer Medeiros and Edmundo Madeira},
booktitle = {Proc. OTM Confederated International Conferences, CoopIS, DOA, and ODBASE 2004 - LNCS 3290},
date = {2004-10-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/coopis2004.pdf},
note = {DOI 10.1007/b102173},
pages = {319-336},
publisher = {Springer Berlin / Heidelberg},
title = {A Collaborative Model for Agricultural Supply Chains.},
volume = {3290},
year = {2004}
}
This paper presents a collaborative model for agricultural supply chains that supports negotiation, renegotiation, coordination and documentation mechanisms, adapted to situations found in this kind of supply chain – such as return flows and composite regulations. This model comprises basic building blocks and elements to support a chain's dynamic execution. The model is supported by an architecture where chain elements are mapped to Web Services and their dynamics to service orchestration. Model and architecture are motivated by a real case study, for dairy supply chains.
|
Yi, Bei
A Data Model for Moving Objects (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
2004.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{Yi2004,
abstract = {The dissemination of devices like GPS and wireless networks has enabled new applications that collect and analyze data about mobile objects. Traditional database systems do not support the management of mobile object data, since a great amount of information is continuously generated. Research in this area is recent, with relatively few works on mobile data management. This MSc thesis proposes an object-oriented moving object data model that incorporates two characteristics: it supports modeling of static, spatial, temporal, spatio-temporal and mobile objects in a homogeneous way; and it specifies a set of basic operators and their algorithmic specification. These operators can be composed to obtain a wide variety of complex operators to query a mobile object database. Unlike most other proposals, this model supports not only 1D objects, but also those with 2D and 3D geometric descriptions. http://libdigi.unicamp.br/document/?did=13826.},
author = {Bei Yi},
date = {2004-07-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/Yi.pdf},
school = {Instituto de Computação - Unicamp},
title = {A Data Model for Moving Objects},
year = {2004}
}
The dissemination of devices like GPS and wireless networks has enabled new applications that collect and analyze data about mobile objects. Traditional database systems do not support the management of mobile object data, since a great amount of information is continuously generated. Research in this area is recent, with relatively few works on mobile data management. This MSc thesis proposes an object-oriented moving object data model that incorporates two characteristics: it supports modeling of static, spatial, temporal, spatio-temporal and mobile objects in a homogeneous way; and it specifies a set of basic operators and their algorithmic specification. These operators can be composed to obtain a wide variety of complex operators to query a mobile object database. Unlike most other proposals, this model supports not only 1D objects, but also those with 2D and 3D geometric descriptions. http://libdigi.unicamp.br/document/?did=13826.
|
Torres, Ricardo da Silva;
Falcão, Alexandre Xavier;
Costa, Luciano da Fontoura
A Graph-based Approach for Multiscale Shape Analysis. (article)
Pattern Recognition,
6,
2004.
(
Abstract |
Links |
BibTeX |
Tags:
Article
)
@article{daTorres2004b,
abstract = {This paper presents two shape descriptors, multiscale fractal dimension and contour saliences, using a graph-based approach --- the image foresting transform. It introduces a robust approach to locate contour saliences from the relation between contour and skeleton. The contour salience descriptor consists of a vector, with salience location and value along the contour, and a matching algorithm. We compare both descriptors with fractal dimension, Fourier descriptors, moment invariants, Curvature Scale Space and Beam Angle Statistics with regard to their invariance to characteristics of objects that belong to the same class (compactability) and to their ability to separate objects of distinct classes (separability).},
author = {Ricardo da Silva Torres and Alexandre Xavier Falcão and Luciano da Fontoura Costa},
date = {2004-06-01},
journal = {Pattern Recognition},
keyword = {Article},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/torres04pr.pdf},
number = {6},
pages = {1163--1174},
title = {A Graph-based Approach for Multiscale Shape Analysis.},
volume = {37},
year = {2004}
}
This paper presents two shape descriptors, multiscale fractal dimension and contour saliences, using a graph-based approach --- the image foresting transform. It introduces a robust approach to locate contour saliences from the relation between contour and skeleton. The contour salience descriptor consists of a vector, with salience location and value along the contour, and a matching algorithm. We compare both descriptors with fractal dimension, Fourier descriptors, moment invariants, Curvature Scale Space and Beam Angle Statistics with regard to their invariance to characteristics of objects that belong to the same class (compactability) and to their ability to separate objects of distinct classes (separability).
|
Bimonte, José Antonio
Specification of a Public Bid System in eGovernment (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
2004.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{Bimonte2004,
abstract = {The improvement, reorganization and use of technology in the purchasing process of a public company, besides strengthening its internal control, reduces operational costs. This increases efficiency and transparency in public bidding systems. This work presents a model of an Electronic Purchasing System for public companies. This system uses the Internet as a means to support negotiation, within legal principles. The work also proposes a model to manage the supply process. This model supports integration of the proposed solution with G2B (Government to Business) electronic commerce, thereby contributing to understanding the purchase workflow within public companies in Brazil.},
author = {José Antonio Bimonte},
date = {2004-06-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/bimonte.pdf},
school = {Instituto de Computação - Unicamp},
title = {Specification of a Public Bid System in eGovernment},
year = {2004}
}
The improvement, reorganization and use of technology in the purchasing process of a public company, besides strengthening its internal control, reduces operational costs. This increases efficiency and transparency in public bidding systems. This work presents a model of an Electronic Purchasing System for public companies. This system uses the Internet as a means to support negotiation, within legal principles. The work also proposes a model to manage the supply process. This model supports integration of the proposed solution with G2B (Government to Business) electronic commerce, thereby contributing to understanding the purchase workflow within public companies in Brazil.
|
Metze, Konradin;
Adam, Randall Luis;
Leite, Neucimar J.
The sonification of cytologic images. (conference)
Cytometry Part A,
Wiley,
1,
2004.
(
Links |
BibTeX |
Tags:
Conference
)
@conference{Metze2004,
author = {Konradin Metze and Randall Luis Adam and Neucimar J. Leite},
booktitle = {Cytometry Part A},
date = {2004-05-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/p12.pdf},
note = {ISAC XXII International Congress:Oral Presentation Abstracts},
number = {1},
pages = {85-85},
publisher = {Wiley},
title = {The sonification of cytologic images.},
volume = {59A},
year = {2004}
}
|
Adam, Randall Luis;
Ribeiro, Elisângela;
Metze, Konradin;
Leite, Neucimar J.;
Lorand-Metze, Irene
Morphometric and granulometric features of erythroblasts as a diagnostic tool of hematologic diseases. (conference)
Cytometry Part A,
Wiley,
1,
2004.
(
Links |
BibTeX |
Tags:
Conference
)
@conference{Adam2004b,
author = {Randall Luis Adam and Elisângela Ribeiro and Konradin Metze and Neucimar J. Leite and Irene Lorand-Metze},
booktitle = {Cytometry Part A},
date = {2004-05-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/p20.pdf},
note = {ISAC XXII International Congress:Oral Presentation Abstracts},
number = {1},
pages = {46-46},
publisher = {Wiley},
title = {Morphometric and granulometric features of erythroblasts as a diagnostic tool of hematologic diseases.},
volume = {59A},
year = {2004}
}
|
Adam, Randall Luis;
Corsini, Tereza C. G.;
Silva, Patrícia Villalobos;
Cintra, Maria Letícia;
Leite, Neucimar J.;
Metze, Konradin
Fractal dimensions applied to thick contour detection and residues - Comparison of keloids and hypertrophic scars. (conference)
Cytometry Part A,
Wiley,
1,
2004.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Adam2004,
note = {ISAC XXII International Congress: Oral Presentation Abstracts},
author = {Randall Luis Adam and Tereza C. G. Corsini and Patrícia Villalobos Silva and Maria Letícia Cintra and Neucimar J. Leite and Konradin Metze},
booktitle = {Cytometry Part A},
date = {2004-05-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/p37-38.pdf},
number = {1},
pages = {63-64},
publisher = {Wiley},
title = {Fractal dimensions applied to thick contour detection and residues - Comparison of keloids and hypertrophic scars.},
volume = {59A},
year = {2004}
}
ISAC XXII International Congress: Oral Presentation Abstracts
|
Andrade, Daniel da Silva
Statistical significance tests and evaluation of a model of content image recovering (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
2004.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{daAndrade2004,
abstract = {http://libdigi.unicamp.br/document/?did=11484},
author = {Daniel da Silva Andrade},
date = {2004-02-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2015/09/AndradeDanieldaSilva.pdf},
school = {Instituto de Computação - Unicamp},
title = {Statistical significance tests and evaluation of a model of content image recovering},
year = {2004}
}
http://libdigi.unicamp.br/document/?did=11484
|
Simões, Nielsen Cassiano
Detection of some abrupt transitions in video sequences (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
2004.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{Simoes2004,
abstract = {A digital video is represented by a sequence of images or frames. A video shot is an uninterrupted segment of screen time, space and graphical configurations. The problem of detecting transitions between shots can be seen as one of the most important steps in the process of segmenting and parsing a digital video. In order to detect these events automatically, some studies rely on frame-by-frame comparison using dissimilarity measures based on color, shape and texture information, while others apply image processing techniques over a representative image of the whole video. This work describes a new approach to detect transitions and abrupt effects (cuts and flashes) in image sequences, using simple, low-computational-cost algorithms defined on pattern identification in a 1D signal. The results presented here show the good performance of the method in identifying the corresponding events. http://libdigi.unicamp.br/document/?did=10165},
author = {Nielsen Cassiano Simões},
date = {2004-02-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2015/09/SimoesNielsenCassiano.pdf},
school = {Instituto de Computação - Unicamp},
title = {Detection of some abrupt transitions in video sequences},
year = {2004}
}
A digital video is represented by a sequence of images or frames. A video shot is an uninterrupted segment of screen time, space and graphical configurations. The problem of detecting transitions between shots can be seen as one of the most important steps in the process of segmenting and parsing a digital video. In order to detect these events automatically, some studies rely on frame-by-frame comparison using dissimilarity measures based on color, shape and texture information, while others apply image processing techniques over a representative image of the whole video. This work describes a new approach to detect transitions and abrupt effects (cuts and flashes) in image sequences, using simple, low-computational-cost algorithms defined on pattern identification in a 1D signal. The results presented here show the good performance of the method in identifying the corresponding events. http://libdigi.unicamp.br/document/?did=10165
|
Torres, Ricardo da Silva;
Medeiros, Claudia Bauzer;
Gonçalves, Marcos André;
Fox, Edward A.
An OAI Compliant Content-Based Image Search Component (conference)
ACM-IEEE Joint Conference on Digital Libraries,
Tucson, AZ, USA,
2004.
(
Links |
BibTeX |
Tags:
Conference
)
@conference{daTorres2004c,
address = {Tucson, AZ, USA},
author = {Ricardo da Silva Torres and Claudia Bauzer Medeiros and Marcos André Gonçalves and Edward A. Fox},
booktitle = {ACM-IEEE Joint Conference on Digital Libraries},
date = {2004-01-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/torres04jcdldemo.pdf},
note = {Demo},
pages = {418},
title = {An OAI Compliant Content-Based Image Search Component},
year = {2004}
}
|
Peerbocus, A.;
Medeiros, C. B.;
Voisard, A.;
Jomier, G.
A System for Change Documentation based on a Spatiotemporal Database (article)
Geoinformatica,
2,
2004.
(
Abstract |
Links |
BibTeX |
Tags:
Article
)
@article{Peerbocus2004,
abstract = {The evolution of geographic phenomena has been one of the concerns of spatiotemporal database research. However, in a large spectrum of geographical applications, users need more than a mere representation of data evolution. For instance, in urban management applications - e.g. cadastral evolution - users often need to know why, how, and by whom certain changes have been performed as well as their possible impact on the environment. Answers to such queries are not possible unless supplementary information concerning real world events is associated with the corresponding changes in the database and is managed efficiently. This paper proposes a solution to this problem, which is based on extending a spatiotemporal database with a mechanism for managing documentation on the evolution of geographic information. This solution has been implemented in a GIS-based prototype, which is also discussed in the paper.},
author = {A. Peerbocus and C. B. Medeiros and A. Voisard and G. Jomier},
date = {2004-01-01},
journal = {Geoinformatica},
keyword = {Article},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/peerbocusetal2004.pdf},
number = {2},
pages = {173-204},
title = {A System for Change Documentation based on a Spatiotemporal Database},
volume = {8},
year = {2004}
}
The evolution of geographic phenomena has been one of the concerns of spatiotemporal database research. However, in a large spectrum of geographical applications, users need more than a mere representation of data evolution. For instance, in urban management applications - e.g. cadastral evolution - users often need to know why, how, and by whom certain changes have been performed as well as their possible impact on the environment. Answers to such queries are not possible unless supplementary information concerning real world events is associated with the corresponding changes in the database and is managed efficiently. This paper proposes a solution to this problem, which is based on extending a spatiotemporal database with a mechanism for managing documentation on the evolution of geographic information. This solution has been implemented in a GIS-based prototype, which is also discussed in the paper.
|
Nascimento, A.L.T.O.;
Verjovski-Almeida, S.;
Sluys, M.A. Van;
Monteiro-Vitorello, C.B.;
Camargo, L.E.A.;
Digiampietri, Luciano Antonio;
Harstkeerl, R.A.;
Ho, P.L.;
Marques, M.V.;
Oliveira, M.C.;
Setubal, Joao Carlos;
Haake, D.A.;
Martins, E.A.L.
Genome features of Leptospira interrogans serovar Copenhageni (article)
Brazilian Journal of Medical and Biological Research [online],
4,
2004.
(
Abstract |
BibTeX |
Tags:
Article
)
@article{Nascimento2004,
abstract = {We report novel features of the genome sequence of Leptospira interrogans serovar Copenhageni, a highly invasive spirochete. Leptospira species colonize a significant proportion of rodent populations worldwide and produce life-threatening infections in mammals. Genomic sequence analysis reveals the presence of a competent transport system with 13 families of genes encoding for major transporters including a three-member component efflux system compatible with the long-term survival of this organism. The leptospiral genome contains a broad array of genes encoding regulatory system, signal transduction and methyl-accepting chemotaxis proteins, reflecting the organism's ability to respond to diverse environmental stimuli. The identification of a complete set of genes encoding the enzymes for the cobalamin biosynthetic pathway and the novel coding genes related to lipopolysaccharide biosynthesis should bring new light to the study of Leptospira physiology. Genes related to toxins, lipoproteins and several surface-exposed proteins may facilitate a better understanding of the Leptospira pathogenesis and may serve as potential candidates for vaccine.},
author = {A.L.T.O. Nascimento and S. Verjovski-Almeida and M.A. Van Sluys and C.B. Monteiro-Vitorello and L.E.A. Camargo and Luciano Antonio Digiampietri and R.A. Harstkeerl and P.L. Ho and M.V. Marques and M.C. Oliveira and Joao Carlos Setubal and D.A. Haake and E.A.L. Martins},
date = {2004-01-01},
journal = {Brazilian Journal of Medical and Biological Research [online]},
keyword = {Article},
note = {ISSN 0100-879X},
number = {4},
pages = {459-477},
title = {Genome features of Leptospira interrogans serovar Copenhageni},
volume = {37},
year = {2004}
}
We report novel features of the genome sequence of Leptospira interrogans serovar Copenhageni, a highly invasive spirochete. Leptospira species colonize a significant proportion of rodent populations worldwide and produce life-threatening infections in mammals. Genomic sequence analysis reveals the presence of a competent transport system with 13 families of genes encoding for major transporters including a three-member component efflux system compatible with the long-term survival of this organism. The leptospiral genome contains a broad array of genes encoding regulatory system, signal transduction and methyl-accepting chemotaxis proteins, reflecting the organism's ability to respond to diverse environmental stimuli. The identification of a complete set of genes encoding the enzymes for the cobalamin biosynthetic pathway and the novel coding genes related to lipopolysaccharide biosynthesis should bring new light to the study of Leptospira physiology. Genes related to toxins, lipoproteins and several surface-exposed proteins may facilitate a better understanding of the Leptospira pathogenesis and may serve as potential candidates for vaccine.
|
2003 |
Melo, Tiago Eugenio de
Use and Application of Economic Models in Geomarketing Information Systems (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
2003.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{deMelo2003,
abstract = {Survival in the business world depends on knowledge of one's clients and competitors. A crucial factor in this competition is the ability to manage business data within a geographic context. The search for efficiency in decision making motivated the emergence of geomarketing, which combines marketing policies and strategies to information systems and the geographic location of the resources manipulated. This work aims to fill a gap in this nascent area, by combining results in economic models, computer science and geoprocessing. The main contributions of this dissertation are: a) a survey of the theoretical basis underlying information systems applied to geomarketing, both in economic modeling and computer science aspects; b) analysis of software engineering methodologies applied to the development of geomarketing applications; and c) implementation of a real life case study in geomarketing, adapting a specific economic model, and coupling it to a geographic information system.},
author = {Tiago Eugenio de Melo},
date = {2003-12-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/tiago_melo.pdf},
school = {Instituto de Computação - Unicamp},
title = {Use and Application of Economic Models in Geomarketing Information Systems},
year = {2003}
}
Survival in the business world depends on knowledge of one's clients and competitors. A crucial factor in this competition is the ability to manage business data within a geographic context. The search for efficiency in decision making motivated the emergence of geomarketing, which combines marketing policies and strategies to information systems and the geographic location of the resources manipulated. This work aims to fill a gap in this nascent area, by combining results in economic models, computer science and geoprocessing. The main contributions of this dissertation are: a) a survey of the theoretical basis underlying information systems applied to geomarketing, both in economic modeling and computer science aspects; b) analysis of software engineering methodologies applied to the development of geomarketing applications; and c) implementation of a real life case study in geomarketing, adapting a specific economic model, and coupling it to a geographic information system.
|
Fileto, Renato;
Medeiros, Claudia Bauzer
A Survey on Information Systems Interoperability (Technical Report)
Institute of Computing, State University of Campinas,
Technical Report,
IC-03-30,
2003.
(
Abstract |
Links |
BibTeX |
Tags:
Techreport
)
@techreport{Fileto2003b,
abstract = {The interoperability of information systems has.},
author = {Renato Fileto and Claudia Bauzer Medeiros},
date = {2003-12-01},
institution = {Institute of Computing, State University of Campinas},
keyword = {Techreport},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/03-30.pdf},
number = {IC-03-30},
title = {A Survey on Information Systems Interoperability},
type = {Technical Report},
year = {2003}
}
The interoperability of information systems has.
|
Fileto, Renato
The POESIA Approach for the Integration of Data and Services in the Semantic Web (phdthesis)
Instituto de Computação - Unicamp,
phdthesis,
2003.
(
Abstract |
Links |
BibTeX |
Tags:
PhDThesis
)
@phdthesis{Fileto2003,
abstract = {POESIA (Processes for Open-Ended Systems for Information Analysis), the approach proposed in this work, supports the construction of complex processes that involve the integration and analysis of data from several sources, particularly in scientific applications. This approach is centered on two types of semantic Web mechanisms: scientific workflows, to specify and compose Web services; and domain ontologies, to enable semantic interoperability and management of data and processes. The main contributions of this thesis are: (i) a theoretical framework to describe, discover and compose data and services on the Web, including rules to check the semantic consistency of resource compositions; (ii) ontology-based methods to help data integration and estimate data provenance in cooperative processes on the Web; (iii) partial implementation and validation of the proposal, in a real application for the domain of agricultural planning, analyzing the benefits and scalability problems of the current semantic Web technology when faced with large volumes of data.},
author = {Renato Fileto},
date = {2003-12-01},
keyword = {PhDThesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/renato_fileto.pdf},
school = {Instituto de Computação - Unicamp},
title = {The POESIA Approach for the Integration of Data and Services in the Semantic Web},
year = {2003}
}
POESIA (Processes for Open-Ended Systems for Information Analysis), the approach proposed in this work, supports the construction of complex processes that involve the integration and analysis of data from several sources, particularly in scientific applications. This approach is centered on two types of semantic Web mechanisms: scientific workflows, to specify and compose Web services; and domain ontologies, to enable semantic interoperability and management of data and processes. The main contributions of this thesis are: (i) a theoretical framework to describe, discover and compose data and services on the Web, including rules to check the semantic consistency of resource compositions; (ii) ontology-based methods to help data integration and estimate data provenance in cooperative processes on the Web; (iii) partial implementation and validation of the proposal, in a real application for the domain of agricultural planning, analyzing the benefits and scalability problems of the current semantic Web technology when faced with large volumes of data.
|
Digiampietri, Luciano Antonio;
Medeiros, Claudia Bauzer;
Setubal, Joao Carlos
A data model for comparative genomics. (article)
Revista Tecnologia da Informação,
2,
2003.
(
Abstract |
Links |
BibTeX |
Tags:
Article
)
@article{Digiampietri2003b,
abstract = {We present a simple data model that can be used as a building block in a comparative genomics information system for prokaryotic genomes. The model is extensible and flexible, and has as its main entities the organism and the gene family. Existing systems tend to focus either on organisms or on gene families. We have applied the model to a set of eight bacterial genomes, and briefly describe the resulting system.},
author = {Luciano Antonio Digiampietri and Claudia Bauzer Medeiros and Joao Carlos Setubal},
date = {2003-12-01},
journal = {Revista Tecnologia da Informação},
keyword = {Article},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/2003_WOB20031.ps},
number = {2},
pages = {35-40},
title = {A data model for comparative genomics.},
volume = {3},
year = {2003}
}
We present a simple data model that can be used as a building block in a comparative genomics information system for prokaryotic genomes. The model is extensible and flexible, and has as its main entities the organism and the gene family. Existing systems tend to focus either on organisms or on gene families. We have applied the model to a set of eight bacterial genomes, and briefly describe the resulting system.
|
Digiampietri, Luciano Antonio;
Medeiros, Claudia Bauzer;
Setubal, Joao Carlos
A data model for comparative genomics. (conference)
Proceedings of the Second Brazilian Workshop on Bioinformatics,
2003.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Digiampietri2003,
abstract = {We present a simple data model that can be used as a building block in a comparative genomics information system for prokaryotic genomes. The model is extensible and flexible, and has as its main entities the organism and the gene family. Existing systems tend to focus either on organisms or on gene families. We have applied the model to a set of eight bacterial genomes, and briefly describe the resulting system.},
author = {Luciano Antonio Digiampietri and Claudia Bauzer Medeiros and Joao Carlos Setubal},
booktitle = {Proceedings of the Second Brazilian Workshop on Bioinformatics},
date = {2003-12-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/2003_WOB2003.ps},
title = {A data model for comparative genomics.},
year = {2003}
}
We present a simple data model that can be used as a building block in a comparative genomics information system for prokaryotic genomes. The model is extensible and flexible, and has as its main entities the organism and the gene family. Existing systems tend to focus either on organisms or on gene families. We have applied the model to a set of eight bacterial genomes, and briefly describe the resulting system.
|
Torres, Ricardo;
Silva, Celmar;
Medeiros, Claudia Bauzer;
Rocha, Heloisa V.
Visual Structures for Image Browsing (conference)
Proc 12th ACM International Conference on Information and Knowledge Management (CIKM03),
ACM Press.,
New Orleans, Louisiana, USA,
2003.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Torres2003,
abstract = {Content-Based Image Retrieval (CBIR) presents several challenges and has been the subject of extensive research in many domains, such as image processing and database systems. Database researchers are concerned with indexing and querying, whereas image processing experts worry about extracting appropriate image descriptors. Comparatively little work has been done on designing user interfaces for CBIR systems. This, in turn, has a profound effect on these systems since the concept of image similarity is strongly influenced by user perception. This paper describes an initial effort to fill this gap, combining recent research in CBIR and Information Visualization, studied from a Human-Computer Interface perspective. It presents two visualization techniques based on Spiral and Concentric Rings implemented in a CBIR system to explore query results. The approach is centered on keeping user focus on both the query image and the most similar retrieved images. Experiments conducted so far suggest that the proposed visualization strategies improve system usability.},
address = {New Orleans, Louisiana, USA},
author = {Ricardo Torres and Celmar Silva and Claudia Bauzer Medeiros and Heloisa V. Rocha},
booktitle = {Proc 12th ACM International Conference on Information and Knowledge Management (CIKM03)},
date = {2003-11-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/torres03cikm.pdf},
pages = {49-55},
publisher = {ACM Press.},
title = {Visual Structures for Image Browsing},
year = {2003}
}
Content-Based Image Retrieval (CBIR) presents several challenges and has been the subject of extensive research in many domains, such as image processing and database systems. Database researchers are concerned with indexing and querying, whereas image processing experts worry about extracting appropriate image descriptors. Comparatively little work has been done on designing user interfaces for CBIR systems. This, in turn, has a profound effect on these systems since the concept of image similarity is strongly influenced by user perception. This paper describes an initial effort to fill this gap, combining recent research in CBIR and Information Visualization, studied from a Human-Computer Interface perspective. It presents two visualization techniques based on Spiral and Concentric Rings implemented in a CBIR system to explore query results. The approach is centered on keeping user focus on both the query image and the most similar retrieved images. Experiments conducted so far suggest that the proposed visualization strategies improve system usability.
|
Fileto, Renato;
Liu, Ling;
Pu, Calton;
Assad, Eduardo Delgado;
Medeiros, Claudia Bauzer
POESIA: An ontological workflow approach for composing Web services in agriculture (article)
The VLDB Journal: The International Journal on Very Large Data Bases,
4,
2003.
(
Abstract |
Links |
BibTeX |
Tags:
Article
)
@article{Fileto2003c,
abstract = {This paper describes the POESIA approach to systematic composition of Web services. This pragmatic approach is strongly centered on the use of domain-specific multidimensional ontologies. Inspired by application needs and founded on ontologies, workflows, and activity models, POESIA provides well-defined operations (aggregation, specialization, and instantiation) to support the composition of Web services. POESIA complements current proposals for Web services definition and composition by providing a higher degree of abstraction with verifiable consistency properties. We illustrate the POESIA approach using a concrete application scenario in agroenvironmental planning.},
author = {Renato Fileto and Ling Liu and Calton Pu and Eduardo Delgado Assad and Claudia Bauzer Medeiros},
date = {2003-11-01},
journal = {The VLDB Journal: The International Journal on Very Large Data Bases},
keyword = {Article},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/vldb2003.pdf},
note = {DOI 10.1007/s00778-003-0103-3},
number = {4},
pages = {352-367},
title = {POESIA: An ontological workflow approach for composing Web services in agriculture},
volume = {12},
year = {2003}
}
This paper describes the POESIA approach to systematic composition of Web services. This pragmatic approach is strongly centered on the use of domain-specific multidimensional ontologies. Inspired by application needs and founded on ontologies, workflows, and activity models, POESIA provides well-defined operations (aggregation, specialization, and instantiation) to support the composition of Web services. POESIA complements current proposals for Web services definition and composition by providing a higher degree of abstraction with verifiable consistency properties. We illustrate the POESIA approach using a concrete application scenario in agroenvironmental planning.
|
Borges, Karla;
Laender, Alberto;
Medeiros, Claudia Bauzer;
Silva, Altigran;
Davis Jr., Clodoveu
The Web as a Data Source for Spatial Databases (conference)
Proc V Brazilian Geoinformatics Symposium GEOINFO 2003,
Campos do Jordao, SP, Brazil,
2003.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Borges2003,
abstract = {With the phenomenal growth of the WWW, rich data sources on many different subjects have become available online. Some of these sources store daily facts that often involve textual geographic descriptions. These descriptions can be perceived as indirectly georeferenced data - e.g., addresses, telephone numbers, zip codes and place names. Under this perspective, the Web becomes a large geospatial database, often providing up-to-date local or regional information. In this work we focus on using the Web as an important source of urban geographic information and propose to enhance urban Geographic Information Systems (GIS) using indirectly georeferenced data extracted from the Web. We describe an environment that allows the extraction of geospatial data from Web pages, converts them to XML format, and uploads the converted data into spatial databases for later use in urban GIS. The effectiveness of our approach is demonstrated by a real urban GIS application that uses street addresses as the basis for integrating data from different Web sources, combining these data with high-resolution imagery.},
address = {Campos do Jordao, SP, Brazil},
author = {Karla Borges and Alberto Laender and Claudia Bauzer Medeiros and Altigran Silva and Clodoveu Davis Jr.},
booktitle = {Proc V Brazilian Geoinformatics Symposium GEOINFO 2003},
date = {2003-11-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/geoinfokarla03.pdf},
title = {The Web as a Data Source for Spatial Databases},
year = {2003}
}
With the phenomenal growth of the WWW, rich data sources on many different subjects have become available online. Some of these sources store daily facts that often involve textual geographic descriptions. These descriptions can be perceived as indirectly georeferenced data - e.g., addresses, telephone numbers, zip codes and place names. Under this perspective, the Web becomes a large geospatial database, often providing up-to-date local or regional information. In this work we focus on using the Web as an important source of urban geographic information and propose to enhance urban Geographic Information Systems (GIS) using indirectly georeferenced data extracted from the Web. We describe an environment that allows the extraction of geospatial data from Web pages, converts them to XML format, and uploads the converted data into spatial databases for later use in urban GIS. The effectiveness of our approach is demonstrated by a real urban GIS application that uses street addresses as the basis for integrating data from different Web sources, combining these data with high-resolution imagery.
|
Fileto, Renato;
Medeiros, Claudia Bauzer;
Liu, Ling;
Pu, Calton;
Assad, Eduardo
Using Domain Ontologies to help Track Data Provenance (conference)
Proc. Brazilian Database Conference, SBBD 2003,
2003.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Fileto2003d,
abstract = {Traditional techniques for tracking data provenance have difficulty adapting to the dynamics of the Web. This paper proposes a scheme for provenance estimation, based on domain ontologies. This scheme is part of the POESIA approach for multi-step integration of semi-structured data. The ontologies used for tracking provenance also help to describe, discover, reuse and integrate data and services. In contrast to traditional techniques, this scheme derives data provenance with fewer annotations at the extensional level and thus lower maintenance costs. Additionally, it promotes the use of ontologies to categorize and correlate scopes of data sets, thereby capturing the operational semantics of data integration processes.},
author = {Renato Fileto and Claudia Bauzer Medeiros and Ling Liu and Calton Pu and Eduardo Assad},
booktitle = {Proc. Brazilian Database Conference, SBBD 2003},
date = {2003-10-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/sbbd03.pdf},
pages = {94-98},
title = {Using Domain Ontologies to help Track Data Provenance},
year = {2003}
}
Traditional techniques for tracking data provenance have difficulty adapting to the dynamics of the Web. This paper proposes a scheme for provenance estimation, based on domain ontologies. This scheme is part of the POESIA approach for multi-step integration of semi-structured data. The ontologies used for tracking provenance also help to describe, discover, reuse and integrate data and services. In contrast to traditional techniques, this scheme derives data provenance with fewer annotations at the extensional level and thus lower maintenance costs. Additionally, it promotes the use of ontologies to categorize and correlate scopes of data sets, thereby capturing the operational semantics of data integration processes.
|
Torres, Ricardo da Silva;
Falcão, Alexandre Xavier;
Costa, Luciano da Fontoura
A Graph-based Approach for Multiscale Shape Analysis. (Technical Report)
Institute of Computing, State University of Campinas,
Technical Report,
IC-03-03,
2003.
(
Abstract |
Links |
BibTeX |
Tags:
Techreport
)
@techreport{daTorres2003b,
abstract = {This paper presents the advantages of computing two recently proposed shape descriptors, multiscale fractal dimension and contour saliences, using the image foresting transform---a graph-based approach to the design of image processing operators. It introduces a robust approach to estimate contour saliences (peaks of high curvature) by exploiting the relation between contour and skeleton. The paper also compares both shape descriptors to fractal dimension, Fourier descriptors, and moment invariants with respect to their invariance to object characteristics that belong to the same class (compact-ability) and to their discriminatory ability to separate objects that belong to distinct classes (separability).},
author = {Ricardo da Silva Torres and Alexandre Xavier Falcão and Luciano da Fontoura Costa},
date = {2003-05-01},
institution = {Institute of Computing, State University of Campinas},
keyword = {Techreport},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/03-003.ps},
number = {IC-03-03},
title = {A Graph-based Approach for Multiscale Shape Analysis.},
type = {Technical Report},
year = {2003}
}
This paper presents the advantages of computing two recently proposed shape descriptors, multiscale fractal dimension and contour saliences, using the image foresting transform---a graph-based approach to the design of image processing operators. It introduces a robust approach to estimate contour saliences (peaks of high curvature) by exploiting the relation between contour and skeleton. The paper also compares both shape descriptors to fractal dimension, Fourier descriptors, and moment invariants with respect to their invariance to object characteristics that belong to the same class (compact-ability) and to their discriminatory ability to separate objects that belong to distinct classes (separability).
|
Silva, Wesley Vaz
An Architecture based on Predicate Generation for derivation of Spatial Association Rules (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
2003.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{Silva2003,
abstract = {This thesis proposes and develops models and techniques for the derivation of spatial association rules, based on a two-step process. In the first stage, the geographic database is preprocessed using a knowledge base specified by an expert user to indicate the relationships of interest. This produces a file where data are organized in terms of conventional and spatial predicates. This file can then be processed by standard data mining algorithms, reducing the derivation of spatial rules to the classical problem of applying traditional association rule mining algorithms. The first step uses two proposed models. The first is the Model of Relational Derivation, whose goal is to identify conventional predicates based on the analysis of descriptive attributes. The second is the Model of Spatial Derivation, responsible for checking spatial relationships among objects and generating spatial predicates, to be subsequently used to derive spatial association rules. A subsequent denormalization algorithm combines conventional and spatial predicates into a single file, used to mine association rules. The main contributions of this work are (i) the specification and validation of a model to derive spatial predicates, (ii) the creation of an architecture that allows obtaining spatial association rules using standard relational mining algorithms, (iii) the use of a knowledge base to obtain predicates which are relevant to the user, and (iv) the implementation of a prototype.},
author = {Wesley Vaz Silva},
date = {2003-03-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/wesley_silva.pdf},
school = {Instituto de Computação - Unicamp},
title = {An Architecture based on Predicate Generation for derivation of Spatial Association Rules},
year = {2003}
}
This thesis proposes and develops models and techniques for the derivation of spatial association rules, based on a two-step process. In the first stage, the geographic database is preprocessed using a knowledge base specified by an expert user to indicate the relationships of interest. This produces a file where data are organized in terms of conventional and spatial predicates. This file can then be processed by standard data mining algorithms, reducing the derivation of spatial rules to the classical problem of applying traditional association rule mining algorithms. The first step uses two proposed models. The first is the Model of Relational Derivation, whose goal is to identify conventional predicates based on the analysis of descriptive attributes. The second is the Model of Spatial Derivation, responsible for checking spatial relationships among objects and generating spatial predicates, to be subsequently used to derive spatial association rules. A subsequent denormalization algorithm combines conventional and spatial predicates into a single file, used to mine association rules. The main contributions of this work are (i) the specification and validation of a model to derive spatial predicates, (ii) the creation of an architecture that allows obtaining spatial association rules using standard relational mining algorithms, (iii) the use of a knowledge base to obtain predicates which are relevant to the user, and (iv) the implementation of a prototype.
|
Blin, M.-J.;
Medeiros, Claudia Bauzer;
Wainer, Jacques
A Reuse-oriented Workflow Definition Language. (article)
International Journal of Cooperative Information Systems,
1,
2003.
(
Abstract |
BibTeX |
Tags:
Article
)
@article{Blin2003,
abstract = {This paper presents a new formalism for workflow process definition, which combines research in programming languages and in database systems. This formalism is based on creating a library of workflow building blocks, which can be progressively combined and nested to construct complex workflows. Workflows are specified declaratively, using a simple high level language, which allows the dynamic definition of exception handling and events, as well as dynamically overriding workflow definition. This ensures a high degree of flexibility in data and control flow specification, as well as in reuse of workflow specifications to construct other workflows. The resulting workflow execution environment is well suited to supporting cooperative work.},
author = {M.-J. Blin and Claudia Bauzer Medeiros and Jacques Wainer},
date = {2003-03-01},
journal = {International Journal of Cooperative Information Systems},
keyword = {Article},
note = {DOI:10.1142/S0218843003000553},
number = {1},
pages = {1-36},
title = {A Reuse-oriented Workflow Definition Language.},
volume = {12},
year = {2003}
}
This paper presents a new formalism for workflow process definition, which combines research in programming languages and in database systems. This formalism is based on creating a library of workflow building blocks, which can be progressively combined and nested to construct complex workflows. Workflows are specified declaratively, using a simple high level language, which allows the dynamic definition of exception handling and events, as well as dynamically overriding workflow definition. This ensures a high degree of flexibility in data and control flow specification, as well as in reuse of workflow specifications to construct other workflows. The resulting workflow execution environment is well suited to supporting cooperative work.
|
Rocha, Henrique
Metadata for Scientific Workflows to support Environmental Planning (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
2003.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{Rocha2003,
abstract = {Environmental Planning Activities have received great attention in recent years, in response to factors that include the acceleration of population growth and the consequent need for rational exploration of natural resources. The problems in this domain are complex and the goals may often conflict, demanding the cooperation of many kinds of experts from several application domains. The WOODSS (Workflow-based Spatial Decision Support System) system, developed at UNICAMP's Institute of Computing, supports environmental planning activities, documenting them through scientific workflows stored in a database. The focus of this dissertation is the specification of efficient means for managing these workflows. The solution is based on the use of metadata specific to scientific workflows. This solution allows flexible access to environmental plans, using distinct sets of parameters, thereby helping communication among the experts involved, as well as plan maintenance, reuse and evolution. The main contributions of this dissertation are: (1) a survey of requirements for documenting environmental planning activities; (2) the proposal of a metadata standard for workflows which document environmental planning activities; (3) the specification of mechanisms to couple this standard to WOODSS; and (4) a partial implementation of the proposal, geared towards system extensibility.},
author = {Henrique Rocha},
date = {2003-02-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/henrique_rocha.pdf},
school = {Instituto de Computação - Unicamp},
title = {Metadata for Scientific Workflows to support Environmental Planning},
year = {2003}
}
Environmental Planning Activities have received great attention in recent years, in response to factors that include the acceleration of population growth and the consequent need for rational exploration of natural resources. The problems in this domain are complex and the goals may often conflict, demanding the cooperation of many kinds of experts from several application domains. The WOODSS (Workflow-based Spatial Decision Support System) system, developed at UNICAMP's Institute of Computing, supports environmental planning activities, documenting them through scientific workflows stored in a database. The focus of this dissertation is the specification of efficient means for managing these workflows. The solution is based on the use of metadata specific to scientific workflows. This solution allows flexible access to environmental plans, using distinct sets of parameters, thereby helping communication among the experts involved, as well as plan maintenance, reuse and evolution. The main contributions of this dissertation are: (1) a survey of requirements for documenting environmental planning activities; (2) the proposal of a metadata standard for workflows which document environmental planning activities; (3) the specification of mechanisms to couple this standard to WOODSS; and (4) a partial implementation of the proposal, geared towards system extensibility.
|
Resende, Silvania Maria
Database-centered Documentation of Environmental Planning Activities (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
2003.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{Resende2003,
abstract = {The environmental planning process is a complex task that covers various aspects, involves a series of steps, and is fed by many data sources. Normally, this process demands the cooperation of multidisciplinary teams that discuss many planning alternatives. These alternatives consider, for instance, multiple issues on the preservation or recovery of environmental resources. One of the main problems in this process is the lack of associated documentation. As in any cooperative activity, documentation is important for revision, maintenance and evolution of the plan, and for communication among designers. The goal of this dissertation is to partially solve the documentation problem through the specification and partial implementation of an environment to manage, in a unified way, three kinds of documents generated during environmental planning activities: a description of the final product - the plan (WHAT documents), a description of the process used to obtain the final product (HOW documents), and a description of the reasoning behind planning decisions (WHY documents). These documents were specified so as to allow them to be stored and managed in a database. WHAT documents are represented through hypermedia structures, HOW documents using scientific workflows, and WHY documents are based on design rationale structures. The main contributions of this research are: (a) the database-centered specification and design of the WHY, HOW and WHAT documents; (b) the specification of an environment to support the management of these documents, thus fostering cooperative work in environmental planning; (c) a partial implementation of this environment.},
author = {Silvania Maria Resende},
date = {2003-02-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/silvania_resende.pdf},
school = {Instituto de Computação - Unicamp},
title = {Database-centered Documentation of Environmental Planning Activities},
year = {2003}
}
The environmental planning process is a complex task that covers various aspects, involves a series of steps, and is fed by many data sources. Normally, this process demands the cooperation of multidisciplinary teams that discuss many planning alternatives. These alternatives consider, for instance, multiple issues on the preservation or recovery of environmental resources. One of the main problems in this process is the lack of associated documentation. As in any cooperative activity, documentation is important for revision, maintenance and evolution of the plan, and for communication among designers. The goal of this dissertation is to partially solve the documentation problem through the specification and partial implementation of an environment to manage, in a unified way, three kinds of documents generated during environmental planning activities: a description of the final product - the plan (WHAT documents), a description of the process used to obtain the final product (HOW documents), and a description of the reasoning behind planning decisions (WHY documents). These documents were specified so as to allow them to be stored and managed in a database. WHAT documents are represented through hypermedia structures, HOW documents using scientific workflows, and WHY documents are based on design rationale structures. The main contributions of this research are: (a) the database-centered specification and design of the WHY, HOW and WHAT documents; (b) the specification of an environment to support the management of these documents, thus fostering cooperative work in environmental planning; (c) a partial implementation of this environment.
|
Lima, Joao Guilherme de Souza
Management of heterogeneous climate data for applications in agriculture (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
2003.
(
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{deLima2003,
author = {Joao Guilherme de Souza Lima},
date = {2003-01-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2015/09/LimaJoaoGuilhermedeSouza.pdf},
school = {Instituto de Computação - Unicamp},
title = {Management of heterogeneous climate data for applications in agriculture},
year = {2003}
}
|
Torres, Ricardo da Silva;
Picado, Eduardo Miguéis;
Falcão, Alexandre Xavier;
Costa, Luciano da Fontoura
Effective Image Retrieval by Shape Saliences (conference)
Proceedings of the XVI Brazilian Symposium on Computer Graphics and Image Processing (SIBGRAPI'03),
São Carlos, SP, Brazil,
2003.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{daTorres2003,
abstract = {Content-Based Image Retrieval (CBIR) systems have been developed aiming at enabling users to search and retrieve images based on properties such as shape, color and texture. In this paper, we are concerned with shape-based image retrieval. Here, we discuss a recently proposed shape descriptor, called contour saliences, defined as the influence areas of a contour's higher curvature points. This paper introduces a robust approach to estimate contour saliences by exploiting the relation between a contour and its skeleton, modifies the original definition to include the location and the value of saliences along the contour, and proposes a new metric to compare contour saliences. The paper also evaluates the effectiveness of the proposed descriptor with respect to Fourier Descriptors, Curvature Scale Space and Moment Invariants.},
address = {São Carlos, SP, Brazil},
author = {Ricardo da Silva Torres and Eduardo Miguéis Picado and Alexandre Xavier Falcão and Luciano da Fontoura Costa},
booktitle = {Proceedings of the XVI Brazilian Symposium on Computer Graphics and Image Processing (SIBGRAPI'03)},
date = {2003-01-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/torres03sibgrapi.pdf},
pages = {167-174},
title = {Effective Image Retrieval by Shape Saliences},
year = {2003}
}
Content-Based Image Retrieval (CBIR) systems have been developed aiming at enabling users to search and retrieve images based on properties such as shape, color and texture. In this paper, we are concerned with shape-based image retrieval. Here, we discuss a recently proposed shape descriptor, called contour saliences, defined as the influence areas of a contour's higher curvature points. This paper introduces a robust approach to estimate contour saliences by exploiting the relation between a contour and its skeleton, modifies the original definition to include the location and the value of saliences along the contour, and proposes a new metric to compare contour saliences. The paper also evaluates the effectiveness of the proposed descriptor with respect to Fourier Descriptors, Curvature Scale Space and Moment Invariants.
|
Venancio, Lauro Ramos;
Fileto, Renato;
Medeiros, Claudia Bauzer;
Assad, Eduardo
Applying Geographic Object Ontologies to help Navigation in GIS (conference)
Proceedings of the V Brazilian Geoinformatics Symposium (GEOINFO 2003),
Campos do Jordao, SP, Brazil,
2003.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Venancio2003,
abstract = {The Semantic Web has become an active research area with many promising applications. This paper gives a concrete contribution to the adoption of Semantic Web technology in GIS by describing the use of a domain ontology to help navigation on maps and to support the integration of geographic objects on the Web. The OntoCarta system, which we are developing to demonstrate our methods, relies on current standards and public-domain tools to build a map navigator including: (1) a viewer for maps at different scales; (2) a domain ontology to describe and correlate map objects. The combination of these components results in a knowledge-directed cartographic navigation system. This system supports map zooming while keeping contextual information for different levels of abstraction. The adoption of open formats to represent the domain ontology, combined with the consensual character of this ontology, enables the use of OntoCarta in a Web browser and fosters data reuse throughout the Internet.},
address = {Campos do Jordao, SP, Brazil},
author = {Lauro Ramos Venancio and Renato Fileto and Claudia Bauzer Medeiros and Eduardo Assad},
booktitle = {Proceedings of the V Brazilian Geoinformatics Symposium (GEOINFO 2003)},
date = {2003-01-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/geoinfo2003-venanciomedeiros.pdf},
note = {in portuguese},
title = {Applying Geographic Object Ontologies to help Navigation in GIS},
year = {2003}
}
The Semantic Web has become an active research area with many promising applications. This paper gives a concrete contribution to the adoption of Semantic Web technology in GIS by describing the use of a domain ontology to help navigation on maps and to support the integration of geographic objects on the Web. The OntoCarta system, which we are developing to demonstrate our methods, relies on current standards and public-domain tools to build a map navigator including: (1) a viewer for maps at different scales; (2) a domain ontology to describe and correlate map objects. The combination of these components results in a knowledge-directed cartographic navigation system. This system supports map zooming while keeping contextual information for different levels of abstraction. The adoption of open formats to represent the domain ontology, combined with the consensual character of this ontology, enables the use of OntoCarta in a Web browser and fosters data reuse throughout the Internet.
|
Lima, Joao Guilherme Souza;
Medeiros, Claudia Bauzer;
Assad, Eduardo
Integration of Heterogeneous Pluviometric Data For Crop Forecasts (conference)
Proceedings of the V Brazilian Geoinformatics Symposium (GEOINFO 2003),
Campos do Jordao, SP, Brazil,
2003.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Lima2003,
abstract = {Crop forecast is an activity practiced by experts in agriculture, based on large data volumes. These data cover climatological information of the most diverse types, concerning a geographic region and the type of crop. Besides volume, another problem to face concerns data heterogeneity. This paper presents a project for the development of a data management system for crop forecasts. The paper is centered on the management of pluviometric data, an important factor in crop management. The system is being deployed by Embrapa, the Brazilian Agricultural Research Corporation, and part of it is already available on the Web.},
address = {Campos do Jordao, SP, Brazil},
author = {Joao Guilherme Souza Lima and Claudia Bauzer Medeiros and Eduardo Assad},
booktitle = {Proceedings of the V Brazilian Geoinformatics Symposium (GEOINFO 2003)},
date = {2003-01-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/geoinfo2003-limamedeirosassad.pdf},
title = {Integration of Heterogeneous Pluviometric Data For Crop Forecasts},
year = {2003}
}
Crop forecast is an activity practiced by experts in agriculture, based on large data volumes. These data cover climatological information of the most diverse types, concerning a geographic region and the type of crop. Besides volume, another problem to face concerns data heterogeneity. This paper presents a project for the development of a data management system for crop forecasts. The paper is centered on the management of pluviometric data, an important factor in crop management. The system is being deployed by Embrapa, the Brazilian Agricultural Research Corporation, and part of it is already available on the Web.
|
2002 |
Cura, Luis Mariano del Val
Um modelo para recuperação por conteudo de imagens de sensoriamento remoto (phdthesis)
Instituto de Computação - Unicamp,
phdthesis,
2002.
(
Abstract |
Links |
BibTeX |
Tags:
PhDThesis
)
@phdthesis{delCura2002,
abstract = {Resumo: O problema da recuperação de imagens por conteúdo tem sido uma área de muito interesse nos últimos anos, com múltiplas aplicações em diferentes domínios de geração de imagens. Uma classe de imagem onde este problema não tem sido resolvido satisfatoriamente refere-se à classe de Sensoriamento Remoto. Imagens de Sensoriamento Remoto (ISR) são obtidas como combinação do sensoriamento da Terra em múltiplas bandas espectrais. Esta tese aborda o problema da recuperação por conteúdo das ISR. Este tipo de recuperação parte da caracterização do conteúdo de uma imagem, e uma das suas principais abordagens considera modelos matemáticos da área de Processamento de Imagens, abordagem adotada nesta tese. Neste trabalho, abordamos o processo de recuperação de ISR utilizando três recursos principais: padrões de textura e cor como elemento básico da consulta, uso de múltiplos modelos matemáticos de representação e caracterização do conteúdo, e um mecanismo de retroalimentação para o processo de consulta. As principais contribuições da tese são: (1) uma análise dos problemas da recuperação por conteúdo para ISR; (2) a proposta de um modelo para esta recuperação; (3) um modelo e métrica de similaridade baseados no modelo proposto; (4) proposta de implementação do processamento das consultas que mostra a viabilidade do modelo. Abstract: Content-based retrieval of images is a topic of growing interest given its multiple applications. One kind of image that has not yet been dealt with satisfactorily is the so-called Remote Sensing Image (RSI), a special type of image created by combining Earth sensing data from different spectral bands. This work deals with the problem of content-based retrieval of Remote Sensing Images. It uses the image retrieval approach based on content representation models from the image processing area. This work presents a content-based image retrieval model for RSI, based on three main features: patterns of color and texture as the basic query concept, use of multiple content representation models, and a relevance feedback mechanism. The main contributions of this work are: (1) an analysis of content-based RSI retrieval problems; (2) a proposal of a model for RSI retrieval; (3) a proposal of a model and metric for similarity measurement; (4) a proposal of an algorithm for processing content-based queries.},
author = {Luis Mariano del Val Cura},
date = {2002-12-01},
keyword = {PhDThesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/ValCuraLuisMarianodel.pdf},
school = {Instituto de Computação - Unicamp},
title = {Um modelo para recuperação por conteudo de imagens de sensoriamento remoto},
year = {2002}
}
Resumo: O problema da recuperação de imagens por conteúdo tem sido uma área de muito interesse nos últimos anos, com múltiplas aplicações em diferentes domínios de geração de imagens. Uma classe de imagem onde este problema não tem sido resolvido satisfatoriamente refere-se à classe de Sensoriamento Remoto. Imagens de Sensoriamento Remoto (ISR) são obtidas como combinação do sensoriamento da Terra em múltiplas bandas espectrais. Esta tese aborda o problema da recuperação por conteúdo das ISR. Este tipo de recuperação parte da caracterização do conteúdo de uma imagem, e uma das suas principais abordagens considera modelos matemáticos da área de Processamento de Imagens, abordagem adotada nesta tese. Neste trabalho, abordamos o processo de recuperação de ISR utilizando três recursos principais: padrões de textura e cor como elemento básico da consulta, uso de múltiplos modelos matemáticos de representação e caracterização do conteúdo, e um mecanismo de retroalimentação para o processo de consulta. As principais contribuições da tese são: (1) uma análise dos problemas da recuperação por conteúdo para ISR; (2) a proposta de um modelo para esta recuperação; (3) um modelo e métrica de similaridade baseados no modelo proposto; (4) proposta de implementação do processamento das consultas que mostra a viabilidade do modelo. Abstract: Content-based retrieval of images is a topic of growing interest given its multiple applications. One kind of image that has not yet been dealt with satisfactorily is the so-called Remote Sensing Image (RSI), a special type of image created by combining Earth sensing data from different spectral bands. This work deals with the problem of content-based retrieval of Remote Sensing Images. It uses the image retrieval approach based on content representation models from the image processing area. This work presents a content-based image retrieval model for RSI, based on three main features: patterns of color and texture as the basic query concept, use of multiple content representation models, and a relevance feedback mechanism. The main contributions of this work are: (1) an analysis of content-based RSI retrieval problems; (2) a proposal of a model for RSI retrieval; (3) a proposal of a model and metric for similarity measurement; (4) a proposal of an algorithm for processing content-based queries.
|
Baranauskas, Maria Cecilia;
Schimiguel, Juliano;
Prado, A. B.
The Computer as Medium for Expression of Geographical Information: Drawbacks and Challenges (conference)
ECCE-11 Eleventh European Conference on Cognitive Ergonomics: Cognition, Culture and Design,
Italy,
2002.
(
BibTeX |
Tags:
Conference
)
@conference{Baranauskas2002b,
address = {Italy},
author = {Maria Cecilia Baranauskas and Juliano Schimiguel and A. B. Prado},
booktitle = {ECCE-11 Eleventh European Conference on Cognitive Ergonomics: Cognition, Culture and Design},
date = {2002-09-01},
keyword = {Conference},
title = {The Computer as Medium for Expression of Geographical Information: Drawbacks and Challenges},
year = {2002}
}
|
Baranauskas, Maria Cecilia;
Schimiguel, Juliano
Evaluating Signs in Interfaces for Geographic Information Systems (conference)
Conferencia Iberoamericana en Sistemas, Cibernetica e Informatica,
Orlando, Fla, USA,
2002.
(
Links |
BibTeX |
Tags:
Conference
)
@conference{Baranauskas2002,
address = {Orlando, Fla, USA},
keyword = {Conference},
author = {Maria Cecilia Baranauskas and Juliano Schimiguel},
booktitle = {Conferencia Iberoamericana en Sistemas, Cibernetica e Informatica},
date = {2002-07-01},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/ciscibaranauskas.pdf},
title = {Evaluating Signs in Interfaces for Geographic Information Systems},
year = {2002}
}
|
Sasaoka, Liliana
Access Control in Geographic Databases (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
2002.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{Sasaoka2002,
abstract = {The access control problem in databases consists in determining when (and if) users or applications (WHO) can access stored data (WHAT), and what kind of access (HOW) they are allowed. Most of the research in this area is geared towards the management of relational data for commercial applications. The objective of this thesis is to study this problem for geographic databases, where constraints imposed on access control management must consider the spatial location context. The main contributions of this work are: (a) an overview of requirements analysis for access control in geographic databases; (b) the definition of an authorization model based on spatial characterization; (c) a discussion of the implementation aspects of this model; (d) an analysis of how this proposal can be adopted by a large-scale telecommunications AM/FM spatial application, the SAGRE system. SAGRE is an outside-plant management geographic information system, developed at the CPqD Foundation and in use by most telephone service providers in Brazil.},
author = {Liliana Sasaoka},
date = {2002-06-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/liliana_sasaoka.pdf},
school = {Instituto de Computação - Unicamp},
title = {Access Control in Geographic Databases},
year = {2002}
}
The access control problem in databases consists in determining when (and if) users or applications (WHO) can access stored data (WHAT), and what kind of access (HOW) they are allowed. Most of the research in this area is geared towards the management of relational data for commercial applications. The objective of this thesis is to study this problem for geographic databases, where constraints imposed on access control management must consider the spatial location context. The main contributions of this work are: (a) an overview of requirements analysis for access control in geographic databases; (b) the definition of an authorization model based on spatial characterization; (c) a discussion of the implementation aspects of this model; (d) an analysis of how this proposal can be adopted by a large-scale telecommunications AM/FM spatial application, the SAGRE system. SAGRE is an outside-plant management geographic information system, developed at the CPqD Foundation and in use by most telephone service providers in Brazil.
|
Schimiguel, Juliano
3D GIS Application Interfaces considered as Communication Spaces (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
2002.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{Schimiguel2002,
abstract = {A Geographical Information System (GIS) is a system that deals with the manipulation, administration and visualization of geo-referenced data. The term geo-referenced denotes data that possess a representation in a system of geographical coordinates. A GIS allows the creation of applications for specific domains, such as urban and environmental planning. An application involves data, algorithms, functions and visualization (the application interface). There are two GIS interface categories: 2D GIS and 3D GIS. In our work, we are particularly interested in the latter. 2D GIS are restricted to the 2D representation of space. 3D GIS allow the creation of interfaces for applications that raise geographical visualization to a higher level of visual reality. Visual reality, in this context, refers to the vision that a human being has of the real world. Despite their facilities for the manipulation of geographical data, GIS presuppose that application designers have specific knowledge of all aspects of the system's technology, thus restricting their use to people involved in that domain. There is a series of conceptual problems that create a gap between a GIS and the reality perceived by application designers. These problems start with the design of the interface of those tools. This hampers the development process of 3D interfaces for GIS applications. The objective of this work is the study and evaluation of the modelling of 3D interfaces for GIS applications. A case study on ArcView GIS 3D Analyst illustrates this study. As a way of dealing with the problem, we propose the use of a specific semiotics-based methodology, called Communication Space, for modelling 3D GIS application interfaces. Semiotics allows dealing with application interface entities as if they were elements which communicate a meaning, enabling the designer to capture inconsistencies that are important in the (re)design of the 3D interface. The adopted methodology served as a basis to develop an interface layer on ArcView GIS 3D Analyst, called EComSIG. The objective of EComSIG is to hide the inherent complexity of the modelling of 3D interfaces for GIS applications and, at the same time, to systematize the process of designing 3D interfaces for such applications. The contributions of the work are of two natures: (i) theoretical: a study of interface aspects of Geographical Information Systems, considering their semiotic aspects; and (ii) applied: the development of a prototype to evaluate the relevance of the proposed solution.},
author = {Juliano Schimiguel},
date = {2002-02-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/juliano_schimiguel.pdf},
school = {Instituto de Computação - Unicamp},
title = {3D GIS Application Interfaces considered as Communication Spaces},
year = {2002}
}
A Geographical Information System (GIS) is a system that deals with the manipulation, administration and visualization of geo-referenced data. The term geo-referenced denotes data that possess a representation in a system of geographical coordinates. A GIS allows the creation of applications for specific domains, such as urban and environmental planning. An application involves data, algorithms, functions and visualization (the application interface). There are two GIS interface categories: 2D GIS and 3D GIS. In our work, we are particularly interested in the latter. 2D GIS are restricted to the 2D representation of space. 3D GIS allow the creation of interfaces for applications that raise geographical visualization to a higher level of visual reality. Visual reality, in this context, refers to the vision that a human being has of the real world. Despite their facilities for the manipulation of geographical data, GIS presuppose that application designers have specific knowledge of all aspects of the system's technology, thus restricting their use to people involved in that domain. There is a series of conceptual problems that create a gap between a GIS and the reality perceived by application designers. These problems start with the design of the interface of those tools. This hampers the development process of 3D interfaces for GIS applications. The objective of this work is the study and evaluation of the modelling of 3D interfaces for GIS applications. A case study on ArcView GIS 3D Analyst illustrates this study. As a way of dealing with the problem, we propose the use of a specific semiotics-based methodology, called Communication Space, for modelling 3D GIS application interfaces. Semiotics allows dealing with application interface entities as if they were elements which communicate a meaning, enabling the designer to capture inconsistencies that are important in the (re)design of the 3D interface. The adopted methodology served as a basis to develop an interface layer on ArcView GIS 3D Analyst, called EComSIG. The objective of EComSIG is to hide the inherent complexity of the modelling of 3D interfaces for GIS applications and, at the same time, to systematize the process of designing 3D interfaces for such applications. The contributions of the work are of two natures: (i) theoretical: a study of interface aspects of Geographical Information Systems, considering their semiotic aspects; and (ii) applied: the development of a prototype to evaluate the relevance of the proposed solution.
|
Torres, Ricardo da Silva;
Falcão, Alexandre Xavier;
Costa, Luciano da Fontoura
Shape Description by Image Foresting Transform (conference)
14th International Conference on Digital Signal Processing,
2002.
(
BibTeX |
Tags:
Conference
)
@conference{daTorres2002,
author = {Ricardo da Silva Torres and Alexandre Xavier Falcão and Luciano da Fontoura Costa},
booktitle = {14th International Conference on Digital Signal Processing},
date = {2002-01-01},
keyword = {Conference},
pages = {1089-1092},
title = {Shape Description by Image Foresting Transform},
volume = {2},
year = {2002}
}
|
Yi, Bei;
Medeiros, Claudia Bauzer
A Data Model for Mobile Objects (conference)
2002.
(
BibTeX |
Tags:
Conference
)
@conference{Yi2002,
author = {Bei Yi and Claudia Bauzer Medeiros},
date = {2002-01-01},
keyword = {Conference},
note = {In Portuguese},
pages = {33-40},
title = {A Data Model for Mobile Objects},
year = {2002}
}
|
Silva, Wesley Vaz;
Magalhaes, Geovane Cayres
A Model for Transforming Spatial Relationships into Semantically Equivalent Relational Tuples (conference)
IV Brazilian Geoinformatics Symposium,
2002.
(
BibTeX |
Tags:
Conference
)
@conference{Silva2002,
author = {Wesley Vaz Silva and Geovane Cayres Magalhaes},
booktitle = {IV Brazilian Geoinformatics Symposium},
date = {2002-01-01},
keyword = {Conference},
note = {In Portuguese},
pages = {57-66},
title = {A Model for Transforming Spatial Relationships into Semantically Equivalent Relational Tuples},
year = {2002}
}
|
Medeiros, Claudia Bauzer
Advanced Geographic Information Systems (inbook)
Encyclopedia of Life Support Systems, EOLSS,
Eolss Publishers,
2002.
(
Abstract |
BibTeX |
Tags:
Inbook
)
@inbook{Medeiros2002b,
abstract = {Chapter in Encyclopedia of Life Support Systems, EOLSS},
author = {Claudia Bauzer Medeiros},
booktitle = {Encyclopedia of Life Support Systems, EOLSS},
chapter = {Advanced Geographic Information Systems},
date = {2002-01-01},
keyword = {Inbook},
note = {Developed under the auspices of the UNESCO},
pages = {40 pages},
publisher = {Eolss Publishers},
title = {Advanced Geographic Information Systems},
year = {2002}
}
Chapter in Encyclopedia of Life Support Systems, EOLSS
|
Medeiros, Claudia Bauzer
Spatio-temporal Information Systems (inbook)
Encyclopedia of Life Support Systems, EOLSS,
Eolss Publishers,
inbook,
2002.
(
Abstract |
BibTeX |
Tags:
Inbook
)
@inbook{Medeiros2002,
abstract = {Chapter in Encyclopedia of Life Support Systems, EOLSS},
author = {Claudia Bauzer Medeiros},
booktitle = {Encyclopedia of Life Support Systems, EOLSS},
chapter = {Spatio-temporal Information Systems},
date = {2002-01-01},
keyword = {Inbook},
note = {Developed under the auspices of the UNESCO},
pages = {40 pages},
publisher = {Eolss Publishers},
title = {Spatio-temporal Information Systems},
year = {2002}
}
Chapter in Encyclopedia of Life Support Systems, EOLSS
|
2001 |
Kaster, Daniel
Combining Databases and Case Based Reasoning for Decision Support in Environmental Planning (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
2001.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{Kaster2001,
abstract = {Environmental planning takes advantage of Spatial Decision Support Systems (SDSS) for problem solving. These systems supply integrated frameworks that permit users to deal with data and models in analysis and simulation tasks. However, they usually provide generic models which need to be specialized to fit particular situations. Since this process requires considerable effort and expertise, it is crucial to allow planners to profit from others' experience. The goal of this dissertation is to develop mechanisms to help environmental planners solve problems incrementally. The solution presented here consists of coupling Case-Based Reasoning (CBR) to the WOODSS spatial decision support system (WOrkflOw-based spatial Decision Support System), developed at the LIS laboratory at the Institute of Computing, UNICAMP. WOODSS interacts with a Geographical Information System and provides model handling facilities, documenting them by means of scientific workflows. The focus of this work is on specifying and implementing new model storing and retrieval modules for WOODSS, using CBR techniques. The main contributions of this research are: (a) requirements elicitation for using CBR in environmental decision support; (b) development of model management algorithms founded on CBR; and (c) extension of the WOODSS system, making it more suitable for problem solving from precedent cases.},
author = {Daniel Kaster},
date = {2001-03-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2015/09/KasterDanieldosSantos.pdf},
school = {Instituto de Computação - Unicamp},
title = {Combining Databases and Case Based Reasoning for Decision Support in Environmental Planning},
year = {2001}
}
Environmental planning takes advantage of Spatial Decision Support Systems (SDSS) for problem solving. These systems supply integrated frameworks that permit users to deal with data and models in analysis and simulation tasks. However, they usually provide generic models which need to be specialized to fit particular situations. Since this process requires considerable effort and expertise, it is crucial to allow planners to profit from others' experience. The goal of this dissertation is to develop mechanisms to help environmental planners solve problems incrementally. The solution presented here consists of coupling Case-Based Reasoning (CBR) to the WOODSS spatial decision support system (WOrkflOw-based spatial Decision Support System), developed at the LIS laboratory at the Institute of Computing, UNICAMP. WOODSS interacts with a Geographical Information System and provides model handling facilities, documenting them by means of scientific workflows. The focus of this work is on specifying and implementing new model storing and retrieval modules for WOODSS, using CBR techniques. The main contributions of this research are: (a) requirements elicitation for using CBR in environmental decision support; (b) development of model management algorithms founded on CBR; and (c) extension of the WOODSS system, making it more suitable for problem solving from precedent cases.
|
Schimiguel, Juliano;
Baranauskas, Maria Cecilia Calani;
Medeiros, Claudia Bauzer
Modelling the Interface of Geographical Information Systems Applications as Spaces of Communication (conference)
Proceedings IV Brazilian Workshop on Human Factors in Computer-based Systems,
Florianopolis, SC, Brazil,
2001.
(
BibTeX |
Tags:
Conference
)
@conference{Schimiguel2001b,
address = {Florianopolis, SC, Brazil},
author = {Juliano Schimiguel and Maria Cecilia Calani Baranauskas and Claudia Bauzer Medeiros},
booktitle = {Proceedings IV Brazilian Workshop on Human Factors in Computer-based Systems},
date = {2001-01-01},
keyword = {Conference},
note = {in portuguese},
pages = {157-168},
title = {Modelling the Interface of Geographical Information Systems Applications as Spaces of Communication},
year = {2001}
}
|
Schimiguel, Juliano;
Baranauskas, Maria Cecilia Calani;
Medeiros, Claudia Bauzer
The Space of Communication as a metaphor in the design of SIG3D Applications (in Portuguese) (conference)
Proceedings of the IV Brazilian Workshop on Virtual Reality,
Florianopolis, SC, Brazil,
2001.
(
Links |
BibTeX |
Tags:
Conference
)
@conference{Schimiguel2001,
address = {Florianopolis, SC, Brazil},
author = {Juliano Schimiguel and Maria Cecilia Calani Baranauskas and Claudia Bauzer Medeiros},
booktitle = {Proceedings of the IV Brazilian Workshop on Virtual Reality},
date = {2001-01-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/espacoscom.pdf},
pages = {353-354},
title = {The Space of Communication as a metaphor in the design of SIG3D Applications (in Portuguese)},
year = {2001}
}
|
Peerbocus, A.;
Medeiros, Claudia Bauzer;
Voisard, Agnes;
Jomier, G.
Documenting Changes in a Spatiotemporal Database (conference)
Proceedings of the XVI Brazilian Symposium on Database Systems,
2001.
(
BibTeX |
Tags:
Conference
)
@conference{Peerbocus2001,
author = {A. Peerbocus and Claudia Bauzer Medeiros and Agnes Voisard and G. Jomier},
booktitle = {Proceedings of the XVI Brazilian Symposium on Database Systems},
date = {2001-01-01},
keyword = {Conference},
pages = {10-24},
title = {Documenting Changes in a Spatiotemporal Database},
year = {2001}
}
|
Medeiros, Claudia Bauzer
Spatio-temporal database systems: foundations and applications (conference)
Proceedings of the VI Regional Informatics School,
Sao Carlos, SP, Brazil,
2001.
(
Links |
BibTeX |
Tags:
Conference
)
@conference{Medeiros2001,
address = {Sao Carlos, SP, Brazil},
author = {Claudia Bauzer Medeiros},
booktitle = {Proceedings of the VI Regional Informatics School},
date = {2001-01-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/eri-1.pdf},
pages = {241-255},
title = {Spatio-temporal database systems: foundations and applications},
year = {2001}
}
|
Fileto, Renato;
Meira, C. A. A.;
Neto, A. Seixas;
Naka, J.;
Medeiros, Claudia Bauzer
An XML-Centered Warehouse to Manage Information of the Fruit Supply Chain (conference)
Proceedings of The World Conference on Computer in Agriculture and Natural Resources,
2001.
(
BibTeX |
Tags:
Conference
)
@conference{Fileto2001,
author = {Renato Fileto and C. A. A. Meira and A. Seixas Neto and J. Naka and Claudia Bauzer Medeiros},
booktitle = {Proceedings of The World Conference on Computer in Agriculture and Natural Resources},
date = {2001-01-01},
keyword = {Conference},
title = {An XML-Centered Warehouse to Manage Information of the Fruit Supply Chain},
year = {2001}
}
|
2000 |
Matias, Sandro
Query Processing in the BIOTA Biodiversity Database (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
2000.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{Matias2000,
abstract = {SINBIOTASP is the biodiversity information system being developed as part of the BIOTA/FAPESP program. This thesis focuses on the implementation issues of query processing in the SINBIOTASP system. This subject presents many challenges in the formulation and processing of queries, due to the variety and volume of the data and to the wide range of system user profiles. The main contributions of this work are: a survey of the query processing features of many environmental information systems on the Web; a systematization of the query types that are typical of biodiversity applications, considering processing and interface criteria; and the specification of a basic set of spatial operators, as well as general query interfaces, involving maps and textual data, in the context of biodiversity environmental information systems. As a final contribution, this analysis was validated by the development of the Species Mapper module of SINBIOTASP, which allows Web query processing on the collection and distribution of species.},
author = {Sandro Matias},
date = {2000-12-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2015/09/MatiasSandrodePaula.pdf},
school = {Instituto de Computação - Unicamp},
title = {Query Processing in the BIOTA Biodiversity Database},
year = {2000}
}
SINBIOTASP is the biodiversity information system being developed as part of the BIOTA/FAPESP program. This thesis focuses on the implementation issues of query processing in the SINBIOTASP system. This subject presents many challenges in the formulation and processing of queries, due to the variety and volume of the data and to the wide range of system user profiles. The main contributions of this work are: a survey of the query processing features of many environmental information systems on the Web; a systematization of the query types that are typical of biodiversity applications, considering processing and interface criteria; and the specification of a basic set of spatial operators, as well as general query interfaces, involving maps and textual data, in the context of biodiversity environmental information systems. As a final contribution, this analysis was validated by the development of the Species Mapper module of SINBIOTASP, which allows Web query processing on the collection and distribution of species.
|
Kaster, Daniel;
Rocha, Heloisa V.;
Medeiros, Claudia Bauzer
Case-based Reasoning applied to Environmental Modeling with GIS (conference)
Proceedings of First International Conference on Geographic Information Science (GIScience2000),
Georgia, USA,
2000.
(
Links |
BibTeX |
Tags:
Conference
)
@conference{Kaster2000,
address = {Georgia, USA},
author = {Daniel Kaster and Heloisa V. Rocha and Claudia Bauzer Medeiros},
booktitle = {Proceedings of First International Conference on Geographic Information Science (GIScience2000)},
date = {2000-10-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/giscience00.pdf},
note = {extended abstract},
title = {Case-based Reasoning applied to Environmental Modeling with GIS},
year = {2000}
}
|
Voisard, Agnes;
Medeiros, Claudia Bauzer;
Jomier, Genevieve
Database Support for Cooperative Work Documentation (conference)
Proceedings of COOP'2000,
France,
2000.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Voisard2000,
abstract = {Technological changes impose a constant evolution on all kinds of artifacts, and require new solutions for their efficient maintenance. Appropriate documentation is considered fundamental for maintenance and evolution. This situation is even more crucial when one considers today's cooperative environments for designing and developing artifacts. Most of the time, documentation is static and describes what an artifact is, and sometimes how it was designed and constructed. Moreover, in collaborative work, documentation serves as one of the communication means among all involved in creating an artifact. However, several other types of documentation needs have been identified in many domains -- e.g., medicine, engineering, biology or astronomy -- such as flexible versioning for keeping track of an artifact's entire evolution, as well as documentation for the reasoning (the why) behind its construction. Unfortunately, no comprehensive system exists to handle all these documentation requirements: each kind of document is managed by a separate system, and furthermore studied in a different Computer Science field. The what documentation may fall within database or software engineering research, whereas the how is often restricted to hypermedia systems and CSCW, and the why is handled in the context of Artificial Intelligence and cognitive science. This paper presents a unified framework to manage all these kinds of documents within a single database, for engineering artifacts. This allows integrating and coordinating the (cooperative) work of different types of users of these artifacts: designers, customers, salespeople, and constructors. This eliminates the break in continuity found in normal environments, where each kind of documentation is handled separately and uses distinct implementation paradigms. Our framework is exemplified in the context of software module configuration.},
address = {France},
author = {Agnes Voisard and Claudia Bauzer Medeiros and Genevieve Jomier},
booktitle = {Proceedings of COOP'2000},
date = {2000-05-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/coop2000.pdf},
title = {Database Support for Cooperative Work Documentation},
year = {2000}
}
Technological changes impose a constant evolution on all kinds of artifacts, and require new solutions for their efficient maintenance. Appropriate documentation is considered fundamental for maintenance and evolution. This situation is even more crucial when one considers today's cooperative environments for designing and developing artifacts. Most of the time, documentation is static and describes what an artifact is, and sometimes how it was designed and constructed. Moreover, in collaborative work, documentation serves as one of the communication means among all involved in creating an artifact. However, several other types of documentation needs have been identified in many domains -- e.g., medicine, engineering, biology or astronomy -- such as flexible versioning for keeping track of an artifact's entire evolution, as well as documentation for the reasoning (the why) behind its construction. Unfortunately, no comprehensive system exists to handle all these documentation requirements: each kind of document is managed by a separate system, and furthermore studied in a different Computer Science field. The what documentation may fall within database or software engineering research, whereas the how is often restricted to hypermedia systems and CSCW, and the why is handled in the context of Artificial Intelligence and cognitive science. This paper presents a unified framework to manage all these kinds of documents within a single database, for engineering artifacts. This allows integrating and coordinating the (cooperative) work of different types of users of these artifacts: designers, customers, salespeople, and constructors. This eliminates the break in continuity found in normal environments, where each kind of documentation is handled separately and uses distinct implementation paradigms. Our framework is exemplified in the context of software module configuration.
|
Alencar, Alexandre Carvalho de
Data Quality in Geographic Applications (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
2000.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{deAlencar2000,
abstract = {One of the main goals of GIS is to help decision makers carry out their tasks in situations where the spatial dimension is relevant -- e.g., in urban or environmental planning activities. The quality of the decisions, however, is intimately dependent on the quality of the geographic data used. This is usually ignored by decision makers, who limit themselves to relying on the correct operation of the equipment used to collect data or on the GIS where the applications are developed. The goal of this dissertation is to fill this gap by presenting an analysis of data quality in the context of geographic applications. This analysis ranges from the stage of data capture to the presentation of the results of the applications and the interpretation adopted by the user for decision making. Besides an extensive bibliographic survey, other contributions of this work include the suggestion of a basic set of criteria to evaluate this quality, and an analysis of how these criteria can be met. Finally, part of these suggestions was implemented in a tool coupled to a GIS, which allows users to visualize data quality information.},
author = {Alexandre Carvalho de Alencar},
date = {2000-03-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/AlencarAlexandreCarvalhode.pdf},
school = {Instituto de Computação - Unicamp},
title = {Data Quality in Geographic Applications},
year = {2000}
}
One of the main goals of GIS is to help decision makers carry out their tasks in situations where the spatial dimension is relevant -- e.g., in urban or environmental planning activities. The quality of the decisions, however, is intimately dependent on the quality of the geographic data used. This is usually ignored by decision makers, who limit themselves to relying on the correct operation of the equipment used to collect data or on the GIS where the applications are developed. The goal of this dissertation is to fill this gap by presenting an analysis of data quality in the context of geographic applications. This analysis ranges from the stage of data capture to the presentation of the results of the applications and the interpretation adopted by the user for decision making. Besides an extensive bibliographic survey, other contributions of this work include the suggestion of a basic set of criteria to evaluate this quality, and an analysis of how these criteria can be met. Finally, part of these suggestions was implemented in a tool coupled to a GIS, which allows users to visualize data quality information.
|
Gatti, Sandro Danilo
Factors that Affect the Performance of Spatial Join Methods: a Study Based on Real Data (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
2000.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{Gatti2000,
abstract = {Synchronized tree traversal join methods for spatial access methods were analysed. The factors considered included bufferpool size, page size, intermediate join indexes ordering criteria, bufferpool page replacement policies, among others. This analysis was based on real data taken from a GIS application for telecommunications, indexed on a R*-tree. Results of this work assess the way those factors affect spatial join performance and can be used for tuning such methods.},
author = {Sandro Danilo Gatti},
date = {2000-03-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2015/09/GattiSandroDanilo.pdf},
school = {Instituto de Computação - Unicamp},
title = {Factors that Affect the Performance of Spatial Join Methods: a Study Based on Real Data},
year = {2000}
}
Synchronized tree traversal join methods for spatial access methods were analysed. The factors considered included bufferpool size, page size, intermediate join indexes ordering criteria, bufferpool page replacement policies, among others. This analysis was based on real data taken from a GIS application for telecommunications, indexed on a R*-tree. Results of this work assess the way those factors affect spatial join performance and can be used for tuning such methods.
|
Cura, Luis Mariano del Val;
Leite, Neucimar Jeronimo;
Medeiros, Claudia Bauzer
An Architecture for Content-based Retrieval of Remote Sensing Images (conference)
Proceedings of the IEEE International Conference on Multimedia and Expo,
New York, USA,
2000.
(
BibTeX |
Tags:
Conference
)
@conference{delCura2000,
address = {New York, USA},
author = {Luis Mariano del Val Cura and Neucimar Jeronimo Leite and Claudia Bauzer Medeiros},
booktitle = {Proceedings of the IEEE International Conference on Multimedia and Expo},
date = {2000-01-01},
keyword = {Conference},
title = {An Architecture for Content-based Retrieval of Remote Sensing Images},
year = {2000}
}
|
Oliveira, Juliano Lopes de;
Medeiros, Claudia Bauzer
A Software Architecture Framework for Geographic User Interfaces (conference)
Proceedings of the International Workshop on Emerging Technologies for Geo-Based Applications,
Ascona, Switzerland,
2000.
(
Links |
BibTeX |
Tags:
Conference
)
@conference{deOliveira2000,
address = {Ascona, Switzerland},
author = {Juliano Lopes de Oliveira and Claudia Bauzer Medeiros},
booktitle = {Proceedings of the International Workshop on Emerging Technologies for Geo-Based Applications},
date = {2000-01-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/ascona00.pdf},
pages = {233-248},
title = {A Software Architecture Framework for Geographic User Interfaces},
year = {2000}
}
|
Prado, Alysson B.;
Baranauskas, Maria Cecilia Calani;
Medeiros, Claudia Bauzer
Cartography and Geographic Information Systems as Semiotic Systems (conference)
Proceedings of the 8th ACM GIS International Symposium,
Washington D.C., USA,
2000.
(
BibTeX |
Tags:
Conference
)
@conference{Prado2000,
address = {Washington D.C., USA},
author = {Alysson B. Prado and Maria Cecilia Calani Baranauskas and Claudia Bauzer Medeiros},
booktitle = {Proceedings of the 8th ACM GIS International Symposium},
date = {2000-01-01},
keyword = {Conference},
title = {Cartography and Geographic Information Systems as Semiotic Systems},
year = {2000}
}
|
Medeiros, Claudia Bauzer;
Bellosta, Marie-Jo;
Jomier, Genevieve
Multiversion Views: constructing views in a multiversion database (article)
Data and Knowledge Engineering,
2000.
(
Links |
BibTeX |
Tags:
Article
)
@article{Medeiros2000b,
author = {Claudia Bauzer Medeiros and Marie-Jo Bellosta and Genevieve Jomier},
date = {2000-01-01},
journal = {Data and Knowledge Engineering},
keyword = {Article},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/DKEviews.pdf},
pages = {277-306},
title = {Multiversion Views: constructing views in a multiversion database},
volume = {33},
year = {2000}
}
|
Medeiros, Claudia Bauzer;
Salgado, Ana Carolina
Uma proposta de plano pedagógico para a matéria de bancos de dados (Teaching database courses) (conference)
Anais, Curso de Qualidade, SBC,
2000.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Medeiros2000,
abstract = {Database teaching can follow different approaches, depending on the target audience and the goal of the training. This text analyzes some of the existing options for teaching databases both at the undergraduate and at the graduate level in Computing. The approach adopted is that the subject presented - databases - can integrate different areas of Computing. It thus constitutes a link among several concepts presented in different courses of a Computing program.},
author = {Claudia Bauzer Medeiros and Ana Carolina Salgado},
booktitle = {Anais, Curso de Qualidade, SBC},
date = {2000-01-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/ensinobdados.pdf},
title = {Uma proposta de plano pedagógico para a matéria de bancos de dados (Teaching database courses)},
year = {2000}
}
Database teaching can follow different approaches, depending on the target audience and the goal of the training. This text analyzes some of the existing options for teaching databases both at the undergraduate and at the graduate level in Computing. The approach adopted is that the subject presented - databases - can integrate different areas of Computing. It thus constitutes a link among several concepts presented in different courses of a Computing program.
|
Kaster, Daniel;
Rocha, Heloisa V.;
Medeiros, Claudia Bauzer
Applying Case-based Reasoning to Environmental Decision Support Systems (conference)
Proceedings of the II Brazilian Geoinformatics Workshop (GeoInfo),
Sao Paulo, Brazil,
2000.
(
BibTeX |
Tags:
Conference
)
@conference{Kaster2000b,
address = {Sao Paulo, Brazil},
author = {Daniel Kaster and Heloisa V. Rocha and Claudia Bauzer Medeiros},
booktitle = {Proceedings of the II Brazilian Geoinformatics Workshop (GeoInfo)},
date = {2000-01-01},
keyword = {Conference},
note = {in portuguese},
title = {Applying Case-based Reasoning to Environmental Decision Support Systems},
year = {2000}
}
|
Fagundes, Andreia;
Medeiros, Claudia Bauzer
Implementing a Metadata Database for an Environmental Information System (conference)
Proceedings of the XV Brazilian Database Symposium,
SBC,
2000.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Fagundes2000,
abstract = {This paper presents the modeling and implementation aspects of a metadata database for the information system of the BIOTA/FAPESP research program. This program's goal is to foster cooperation among biodiversity researchers in the State of Sao Paulo, Brazil, thus helping to maintain and create environmental protection programs within the State. The information system, under development, will integrate data from the different research groups and foster the dissemination of their work. This information system is unique in several aspects, including the diversity of data managed and the spectrum of users. The metadata database is the system's component responsible for the high-level description of the various biodiversity data gathered by the researchers. This paper concentrates on the metadata standard developed for this information system, and on the database implementation aspects.},
author = {Andreia Fagundes and Claudia Bauzer Medeiros},
booktitle = {Proceedings of the XV Brazilian Database Symposium},
date = {2000-01-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/fagundes.pdf},
note = {also in SIGMOD DISC},
publisher = {SBC},
title = {Implementing a Metadata Database for an Environmental Information System},
year = {2000}
}
This paper presents the modeling and implementation aspects of a metadata database for the information system of the BIOTA/FAPESP research program. This program's goal is to foster cooperation among biodiversity researchers in the State of Sao Paulo, Brazil, thus helping to maintain and create environmental protection programs within the State. The information system, under development, will integrate data from the different research groups and foster the dissemination of their work. This information system is unique in several aspects, including the diversity of data managed and the spectrum of users. The metadata database is the system's component responsible for the high-level description of the various biodiversity data gathered by the researchers. This paper concentrates on the metadata standard developed for this information system, and on the database implementation aspects.
|
1999 |
Fagundes, Andreia da Silva
Design and Implementation of a Metadata Database for the Biodiversity Information System of the State of Sao Paulo (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
1999.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{daFagundes1999,
abstract = {This dissertation presents the design and implementation of the metadata database of the information system for the BIOTA/FAPESP program. This is a long-term scientific program that aims at establishing a common basis for cooperation among different biodiversity researchers and at disseminating their work, in order to support the creation of environmental preservation programs in the State of São Paulo. The metadata database is the system component responsible for the high-level description of the various biodiversity data collected by researchers. This dissertation discusses different aspects of the development of this database, situating it in the context of a biodiversity information system. The main contributions presented are: a) a survey of several proposals for metadata standards for environmental data; b) the proposal of a metadata standard for the biodiversity information system that encompasses other proposals and extends them in order to consider environmental aspects; c) the design of the metadata database; and d) the implementation of a prototype of the information system, with emphasis on its metadata aspects.},
author = {Andreia da Silva Fagundes},
date = {1999-12-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2015/09/FagundesAndréiadaSilva.pdf},
school = {Instituto de Computação - Unicamp},
title = {Design and Implementation of a Metadata Database for the Biodiversity Information System of the State of Sao Paulo},
year = {1999}
}
This dissertation presents the design and implementation of the metadata database of the information system for the BIOTA/FAPESP program. This is a long-term scientific program that aims at establishing a common basis for cooperation among different biodiversity researchers and at disseminating their work, in order to support the creation of environmental preservation programs in the State of São Paulo. The metadata database is the system component responsible for the high-level description of the various biodiversity data collected by researchers. This dissertation discusses different aspects of the development of this database, situating it in the context of a biodiversity information system. The main contributions presented are: a) a survey of several proposals for metadata standards for environmental data; b) the proposal of a metadata standard for the biodiversity information system that encompasses other proposals and extends them in order to consider environmental aspects; c) the design of the metadata database; and d) the implementation of a prototype of the information system, with emphasis on its metadata aspects.
|
Seffino, Laura A.;
Medeiros, Claudia Bauzer;
Rocha, Jansle V.;
Yi, Bei
WOODSS - A Spatial Decision Support System based on Workflows (article)
Decision Support Systems,
1-2,
1999.
(
Links |
BibTeX |
Tags:
Article
)
@article{Seffino1999,
author = {Laura A. Seffino and Claudia Bauzer Medeiros and Jansle V. Rocha and Bei Yi},
date = {1999-07-01},
journal = {Decision Support Systems},
keyword = {Article},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/WOODSSASpatialDecisionSupportSystemBasedOnWorkflows.pdf},
note = {Elsevier},
number = {1-2},
pages = {105--123},
title = {WOODSS - A Spatial Decision Support System based on Workflows},
volume = {27},
year = {1999}
}
|
Silva, Jefferson Rodrigues de Oliveira e
Generation and Indexing of Spatio-Temporal Data (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
1999.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{deeSilva1999,
abstract = {The goal of the dissertation is the design, implementation and evaluation of an access structure for spatiotemporal data. The dissertation is a collection of four papers written in English, with an introduction and a conclusion written in Portuguese. The first paper presents a survey of spatial data indices and persistent indices for traditional data. In addition, the paper describes a novel structure, the HR-tree, as well as its algorithms to insert, delete, update and search data. The second paper addresses the development of an algorithm to generate spatiotemporal data, called GSTD (Generate Spatiotemporal Data). The algorithm allows the generation of spatiotemporal data following a few statistical distributions for some user-defined parameters that control, for example, the initial spatial location, the dynamicity of updates (in time) and the spatial data movements. The third paper presents a comparison of the HR-tree to two other structures. The first one is a 3D spatial structure, based on the R-tree, that treats time as another dimension. In that structure, the initial and end times of the objects have to be known beforehand. The second one is basically a structure that combines two spatial structures, also based on the R-tree: a 2D structure that indexes current objects (i.e., objects whose end time is unknown) and a 3D structure that indexes objects already closed (i.e., objects with initial and end times known). The fourth and last paper describes an application of the HR-tree in another problem domain, namely bitemporal data indexing. The overall conclusion of this work is that the HR-tree has the best performance (when compared to the other two structures) for answering spatial queries at a specific point in time and for small time intervals, but the HR-tree is much bigger than the other two structures. However, nowadays space requirements are not as problematic as response time; hence, we believe the HR-tree is a good access structure for spatiotemporal data.},
author = {Jefferson Rodrigues de Oliveira e Silva},
date = {1999-03-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2015/09/SilvaJeffersonRodriguesdeOliveirae.pdf},
school = {Instituto de Computação - Unicamp},
title = {Generation and Indexing of Spatio-Temporal Data},
year = {1999}
}
The goal of the dissertation is the design, implementation and evaluation of an access structure for spatiotemporal data. The dissertation is a collection of four papers written in English, with an introduction and a conclusion written in Portuguese. The first paper presents a survey of spatial data indices and persistent indices for traditional data. In addition, the paper describes a novel structure, the HR-tree, as well as its algorithms to insert, delete, update and search data. The second paper addresses the development of an algorithm to generate spatiotemporal data, called GSTD (Generate Spatiotemporal Data). The algorithm allows the generation of spatiotemporal data following a few statistical distributions for some user-defined parameters that control, for example, the initial spatial location, the dynamicity of updates (in time) and the spatial data movements. The third paper presents a comparison of the HR-tree to two other structures. The first one is a 3D spatial structure, based on the R-tree, that treats time as another dimension. In that structure, the initial and end times of the objects have to be known beforehand. The second one is basically a structure that combines two spatial structures, also based on the R-tree: a 2D structure that indexes current objects (i.e., objects whose end time is unknown) and a 3D structure that indexes objects already closed (i.e., objects with initial and end times known). The fourth and last paper describes an application of the HR-tree in another problem domain, namely bitemporal data indexing. The overall conclusion of this work is that the HR-tree has the best performance (when compared to the other two structures) for answering spatial queries at a specific point in time and for small time intervals, but the HR-tree is much bigger than the other two structures. However, nowadays space requirements are not as problematic as response time; hence, we believe the HR-tree is a good access structure for spatiotemporal data.
|
Oliveira, Juliano Lopes de;
Goncalves, Marcos Andre;
Medeiros, Claudia Bauzer
A Framework for Designing and Implementing the User Interface of a Geographic Digital Library (article)
International Journal of Digital Libraries,
2-3,
1999.
(
BibTeX |
Tags:
Article
)
@article{deOliveira1999b,
author = {Juliano Lopes de Oliveira and Marcos Andre Goncalves and Claudia Bauzer Medeiros},
date = {1999-01-01},
journal = {International Journal of Digital Libraries},
keyword = {Article},
note = {Springer Verlag},
number = {2-3},
pages = {190--206},
title = {A Framework for Designing and Implementing the User Interface of a Geographic Digital Library},
volume = {2},
year = {1999}
}
|
Oliveira, Juliano Lopes de;
Medeiros, Claudia Bauzer
Techniques, Models and Tools to Support the Construction of User Interfaces of Geographic Applications (conference)
Proceedings of the XIII Brazilian Software Engineering Symposium,
1999.
(
Links |
BibTeX |
Tags:
Conference
)
@conference{deOliveira1999,
author = {Juliano Lopes de Oliveira and Claudia Bauzer Medeiros},
booktitle = {Proceedings of the XIII Brazilian Software Engineering Symposium},
date = {1999-01-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/sbes99.pdf},
note = {in portuguese},
title = {Techniques, Models and Tools to Support the Construction of User Interfaces of Geographic Applications},
year = {1999}
}
|
Soares, Hélio Rubens;
Medeiros, Claudia Bauzer
Integrating Legacy Systems to Heterogeneous Databases (conference)
Proceedings of the XIV Brazilian Database Symposium,
SBC,
1999.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Soares1999,
abstract = {This paper presents a methodology to construct a federated database infrastructure that helps integrate heterogeneous data sources and takes legacy data into account. This methodology considers different kinds of data sources and systems to be combined, and gives guidelines to integrate the data for each situation. The last step of the methodology consists of an algorithm that produces mappings from queries on the federated system to the set of queries on the databases that participate in the federation. The methodology was validated by a case study on databases and legacy systems of the municipal administration of Paulinia, SP.},
author = {Hélio Rubens Soares and Claudia Bauzer Medeiros},
booktitle = {Proceedings of the XIV Brazilian Database Symposium},
date = {1999-01-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/sbbd99.pdf},
note = {in portuguese},
pages = {411-425},
publisher = {SBC},
title = {Integrating Legacy Systems to Heterogeneous Databases},
year = {1999}
}
This paper presents a methodology to construct a federated database infrastructure that helps integrate heterogeneous data sources and takes legacy data into account. This methodology considers different kinds of data sources and systems to be combined, and gives guidelines to integrate the data for each situation. The last step of the methodology consists of an algorithm that produces mappings from queries on the federated system to the set of queries on the databases that participate in the federation. The methodology was validated by a case study on databases and legacy systems of the municipal administration of Paulinia, SP.
|
Medeiros, Claudia Bauzer;
Alencar, Alexandre Carvalho de
Data Quality and Interoperability in GIS (conference)
Proceedings of Infogeo'99,
1999.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Medeiros1999,
abstract = {Interoperability in GIS is an issue of growing importance, due to the increase in the number and volume of available data sources and to the exponential expansion of new applications and systems. Research in this area involves solutions directed towards the different layers of an information system (interoperability based on common interface design, process interoperability, or interoperability through data). The goal of this paper is to point out issues concerning interoperability at the data level. In particular, the text analyses issues related to the quality of geographic data as an additional dimension that must be taken into consideration in the cases of data migration and integration.},
author = {Claudia Bauzer Medeiros and Alexandre Carvalho de Alencar},
booktitle = {Proceedings of Infogeo'99},
date = {1999-01-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/infogeo.pdf},
note = {in Portuguese},
title = {Data Quality and Interoperability in GIS},
year = {1999}
}
|
Goncalves, Marcos Andre;
Medeiros, Claudia Bauzer
Constructing Geographic Digital Libraries using a Hypermedia Framework (article)
Multimedia Tools and Applications,
3,
1999.
(
BibTeX |
Tags:
Article
)
@article{Goncalves1999,
author = {Marcos Andre Goncalves and Claudia Bauzer Medeiros},
date = {1999-01-01},
journal = {Multimedia Tools and Applications},
keyword = {Article},
note = {extends Portuguese version published in 1998},
number = {3},
pages = {341--356},
title = {Constructing Geographic Digital Libraries using a Hypermedia Framework},
volume = {8},
year = {1999}
}
|
1998 |
Soares, Helio Rubens
A Methodology to Integrate Legacy Systems and Heterogeneous Databases (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
1998.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{Soares1998,
abstract = {Applications increasingly need to access different data sources to get information. Many of these sources are managed by legacy systems, and need to be integrated or migrated to become more flexible and manageable. This work proposes a methodology to support the integration of these heterogeneous data sources, taking legacy data into account and considering the features of each system. The methodology weighs several factors that help in choosing the best solution for each case, and includes an algorithm to design the federated system and to process queries over it. The proposed methodology was validated by a case study on databases and legacy systems for a municipal administration system for the city of Paulinia, SP.},
author = {Helio Rubens Soares},
date = {1998-12-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2015/09/SoaresHelioRubens_M.pdf},
school = {Instituto de Computação - Unicamp},
title = {A Methodology to Integrate Legacy Systems and Heterogeneous Databases},
year = {1998}
}
|
Nascimento, Mario;
Silva, Jefferson R. O.;
Theodoridis, Yannis
Access Structure for Moving Points (Technical Report)
TimeCenter,
Technical Report,
TR-33,
1998.
(
Abstract |
Links |
BibTeX |
Tags:
Techreport
)
@techreport{Nascimento1998b,
abstract = {Several applications require management of data which is spatially dynamic, e.g., tracking of battle ships or moving cells in a blood sample. The capability of handling the temporal aspect, i.e., the history of such type of data, is also important. This paper presents and evaluates three temporal extensions of the R-tree, the 3D R-tree, the 2+3 R-tree and the HR-tree, which are capable of indexing spatiotemporal data. Our experiments have shown that while the HR-tree was the largest structure, its query processing cost was over 50% smaller than the ones yielded by the 3D R-tree and the 2+3 R-tree. Also, compared to the (non-practical) approach of storing one R-tree for each of the spatial database states, it offered the same query processing cost, saving around one third of the storage space.},
author = {Mario Nascimento and Jefferson R. O. Silva and Yannis Theodoridis},
date = {1998-09-01},
institution = {TimeCenter},
keyword = {Techreport},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/nascimento98access.pdf},
number = {TR-33},
title = {Access Structure for Moving Points},
type = {Technical Report},
year = {1998}
}
|
Seffino, Laura Andrea
WOODSS - Spatial Decision Support System based on Workflows (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
1998.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{Seffino1998,
abstract = {Environmental planning nowadays takes advantage of Geographic Information Systems (GIS) to manage geo-spatial data. Nevertheless, GIS do not provide facilities to reuse users' expertise in solving problems. This dissertation provides a solution to this limitation, specifying and implementing a Spatial Decision Support System. The user interactions with GIS are intercepted by WOODSS, which documents them as scientific workflows. These workflows can be edited and re-executed directly in the GIS. WOODSS thus allows documenting and repeating planning activities, as well as creating new planning strategies. It was implemented on top of the IDRISI software, and tested in the context of agro-environmental planning activities.},
author = {Laura Andrea Seffino},
date = {1998-07-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/SeffinoLauraAndrea.pdf},
school = {Instituto de Computação - Unicamp},
title = {WOODSS - Spatial Decision Support System based on Workflows},
year = {1998}
}
|
Goncalves, Marcos Andre;
Medeiros, Claudia Bauzer
Constructing Geographic Digital Libraries using a Hypermedia Framework (conference)
Proceedings of the Brazilian Symposium on Multimedia and Hypermedia Systems,
1998.
(
BibTeX |
Tags:
Conference
)
@conference{Goncalves1998,
author = {Marcos Andre Goncalves and Claudia Bauzer Medeiros},
booktitle = {Proceedings of the Brazilian Symposium on Multimedia and Hypermedia Systems},
date = {1998-05-01},
keyword = {Conference},
note = {in Portuguese},
title = {Constructing Geographic Digital Libraries using a Hypermedia Framework},
year = {1998}
}
|
Carneiro, Alexandre Pedrosa
Use of Urban Geographic Data in the Comparison of Spatial Access Methods (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
1998.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{Carneiro1998,
abstract = {This dissertation presents a performance analysis of spatial access methods based on a real-life database. In spite of the large amount of research dealing with the performance comparison of spatial access methods, very little has been done when it comes to considering the properties of specific groups of applications. In part, this is due to the difficulty in obtaining real data sets to represent these applications. The use of real data is necessary, since synthetic data generation may result in data sets with atypical characteristics, leading in turn to conclusions that may not be generally applied. In this context, the main contributions of this work are: the conversion of a real data set that is representative of geographic applications for public utility services management (telecommunication, electricity and water supply, and the like) to a format in which it may be easily delivered to other researchers; and the performance comparison of a group of spatial access methods of the R-tree family with regard to the indexing of these data.},
author = {Alexandre Pedrosa Carneiro},
date = {1998-05-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2015/09/CarneiroAlexandrePedrosa_M.pdf},
school = {Instituto de Computação - Unicamp},
title = {Use of Urban Geographic Data in the Comparison of Spatial Access Methods},
year = {1998}
}
|
Goncalves, Marcos Andre;
Medeiros, Claudia Bauzer
Initiatives That Center on Scientific Dissemination (article)
ACM,
New York, NY, USA,
Commun. ACM,
4,
1998.
(
Links |
BibTeX |
Tags:
)
@article{Goncalves1998b,
acmid = {273063},
address = {New York, NY, USA},
author = {Marcos Andre Goncalves and Claudia Bauzer Medeiros},
date = {1998-04-01},
doi = {10.1145/273035.273063},
issn = {0001-0782},
issue = {April 1998},
journal = {Commun. ACM},
link = {http://doi.acm.org/10.1145/273035.273063},
month = {apr},
number = {4},
numpages = {2},
pages = {80--81},
publisher = {ACM},
title = {Initiatives That Center on Scientific Dissemination},
volume = {41},
year = {1998}
}
|
Faria, Glaucia;
Medeiros, Claudia Bauzer;
Nascimento, Mario A.
An Extensible Framework for Temporal Database Applications (Technical Report)
TimeCenter,
Technical Report,
TR-27,
1998.
(
Abstract |
Links |
BibTeX |
Tags:
Techreport
)
@techreport{Faria1998b,
abstract = {There is a wide range of scientific application domains requiring sophisticated management of spatio-temporal data. However, existing database management systems offer very limited (if any at all) support for managing such data. Thus, it is left to the researchers themselves to repeatedly code this management into each application. Besides being a time consuming task, this process is bound to introduce errors and increase the complexity of application management and data evolution. This paper addresses this very point. We present an extensible framework, based on extending an object-oriented database system, with kernel spatio-temporal classes, data structures and functions, to provide support for the development of spatio-temporal applications. Even though the paper’s arguments are centered on geographic applications, the proposed framework can be used in other application domains where spatial and temporal data evolution must be considered (e.g., Biology).},
author = {Glaucia Faria and Claudia Bauzer Medeiros and Mario A. Nascimento},
date = {1998-04-01},
institution = {TimeCenter},
keyword = {Techreport},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/faria98extensible.pdf},
number = {TR-27},
title = {An Extensible Framework for Temporal Database Applications},
type = {Technical Report},
year = {1998}
}
|
Faria, Glaucia
A Spatio-Temporal Database for Development of Applications in Geographic Information Systems (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
1998.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{Faria1998,
abstract = {This dissertation discusses the implementation of an extensible framework, which provides support for the development of spatio-temporal database applications. The infrastructure, developed on the O2 object-oriented database system, consists of a kernel set of operators and database classes, which meet the minimum requirements for the processing of spatial, temporal and spatio-temporal queries. The main contributions of this work are the specification of the kernel operators and classes and their implementation, validated through a pilot geographic application. Another contribution is the analysis of this implementation, which discusses problems and shortcomings of some models proposed in the literature.},
author = {Glaucia Faria},
date = {1998-03-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/Tese.pdf},
school = {Instituto de Computação - Unicamp},
title = {A Spatio-Temporal Database for Development of Applications in Geographic Information Systems},
year = {1998}
}
|
Weske, M.;
Vossen, G.;
Medeiros, Claudia Bauzer;
Pires, F.
Workflow Management in Geoprocessing Scientific Applications (Technical Report)
Universitat Munster,
Technical Report,
No. 04/98-1. IAI,
1998.
(
Abstract |
Links |
BibTeX |
Tags:
Techreport
)
@techreport{Weske1998b,
abstract = {This paper presents a system which is being developed at the University of Münster to support scientific application environments. This system -- WASA -- is based on taking advantage of workflows to document and monitor the execution of scientific applications. A geoprocessing application is used throughout the paper to illustrate and justify the specificity of the problem and our proposed solution.},
author = {M. Weske and G. Vossen and Claudia Bauzer Medeiros and F. Pires},
date = {1998-02-01},
institution = {Universitat Munster},
keyword = {Techreport},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/weske98workflow.pdf},
number = {No. 04/98-1. IAI},
title = {Workflow Management in Geoprocessing Scientific Applications},
type = {Technical Report},
year = {1998}
}
|
Weske, Mathias;
Vossen, Gottfried;
Medeiros, Claudia Bauzer;
Pires, Fatima
Workflow Management in Geoprocessing Applications (conference)
Proceedings of the 6th International Symposium on Advances in Geographic Information Systems (ACMGIS98),
ACM,
1998.
(
Links |
BibTeX |
Tags:
Conference
)
@conference{Weske1998a,
author = {Mathias Weske and Gottfried Vossen and Claudia Bauzer Medeiros and Fatima Pires},
booktitle = {Proceedings of the 6th International Symposium on Advances in Geographic Information Systems (ACMGIS98)},
date = {1998-01-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/wasa98.pdf},
note = {(extended version published as Technical Report No. 04/98-1. IAI - Universitat Munster)},
pages = {88-93},
publisher = {ACM},
title = {Workflow Management in Geoprocessing Applications},
year = {1998}
}
|
Weske, M.;
Vossen, G.;
Medeiros, C. B.;
Pires, F.
Workflow Management in Geoprocessing Applications (conference)
Proc. ACMGIS98,
1998.
(
BibTeX |
Tags:
Conference
)
@conference{Weske1998,
author = {M. Weske and G. Vossen and C. B. Medeiros and F. Pires},
booktitle = {Proc. ACMGIS98},
date = {1998-01-01},
keyword = {Conference},
note = {Extended Version: Fachbericht Angewandte Mathematik und Informatik 04-98.1, Universität Münster, 1998},
title = {Workflow Management in Geoprocessing Applications},
year = {1998}
}
|
Salles, Marcos Antonio Vaz;
Pires, Fatima;
Medeiros, Claudia Bauzer;
Oliveira, Juliano Lopes de
Development of a Computer Aided Geographic Database Design System (conference)
Proceedings of the XIII Brazilian Database Symposium (SBBD),
1998.
(
BibTeX |
Tags:
Conference
)
@conference{Salles1998,
author = {Marcos Antonio Vaz Salles and Fatima Pires and Claudia Bauzer Medeiros and Juliano Lopes de Oliveira},
booktitle = {Proceedings of the XIII Brazilian Database Symposium (SBBD)},
date = {1998-01-01},
keyword = {Conference},
pages = {235-250},
title = {Development of a Computer Aided Geographic Database Design System},
year = {1998}
}
|
Nascimento, Mario A.;
Silva, Jefferson Rodrigues de Oliveira e
Towards Historical R-trees (conference)
Proceedings of the Symposium on Applied Computing,
1998.
(
BibTeX |
Tags:
Conference
)
@conference{Nascimento1998,
author = {Mario A. Nascimento and Jefferson Rodrigues de Oliveira e Silva},
booktitle = {Proceedings of the Symposium on Applied Computing},
date = {1998-01-01},
keyword = {Conference},
title = {Towards Historical R-trees},
year = {1998}
}
|
Faria, Glaucia;
Medeiros, Claudia Bauzer;
Nascimento, Mario A.
An Extensible Framework for Temporal Scientific Database Applications (conference)
Proceedings of the 10th IEEE SSDBM,
Capri, Italy,
1998.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Faria1998c,
abstract = {There is a wide range of scientific application domains requiring sophisticated management of spatio-temporal data. However, existing database management systems offer very limited (if any at all) support for managing such data. Thus, it is left to the researchers themselves to repeatedly code this management into each application. Besides being a time consuming task, this process is bound to introduce errors and increase the complexity of application management and data evolution. This paper addresses this very point. We present an extensible framework, based on extending an object-oriented database system, with kernel spatio-temporal classes, data structures and functions, to provide support for the development of spatio-temporal applications. Even though the paper’s arguments are centered on geographic applications, the proposed framework can be used in other application domains where spatial and temporal data evolution must be considered (e.g., Biology).},
address = {Capri, Italy},
author = {Glaucia Faria and Claudia Bauzer Medeiros and Mario A. Nascimento},
booktitle = {Proceedings of the 10th IEEE SSDBM},
date = {1998-01-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/timecenter-1.pdf},
note = {Extended version available as TimeCenter Technical Report TR-27},
title = {An Extensible Framework for Temporal Scientific Database Applications},
year = {1998}
}
|
1997 |
Oliveira, Juliano Lopes de;
Goncalves, Marcos Andre;
Medeiros, Claudia Bauzer
Designing and Implementing the User Interface of Geographic Digital Libraries (Technical Report)
IC-UNICAMP,
Technical Report,
IC-97-25,
1997.
(
Abstract |
Links |
BibTeX |
Tags:
Techreport
)
@techreport{deOliveira1997b,
abstract = {Geographic data are useful for a large set of applications, such as urban planning and environmental control. These data are, however, very expensive to acquire and maintain. Moreover, their use is often restricted, for lack of dissemination mechanisms. Digital libraries are a good approach for increasing data availability and therefore reducing cost, since they provide efficient storage and access to large volumes of data. Geographic applications can diminish their costs by reusing and sharing data through Geographic Digital Libraries (GDL). One major drawback to this approach is that it creates the necessity of providing facilities for a large and heterogeneous community of users to search and interact with these Geographic Libraries. We present a solution for this problem, based on a framework that allows the design and construction of customizable user interfaces for GDL applications. This framework relies on two main concepts: a Geographic User Interface Architecture and a Geographic Digital Library.},
author = {Juliano Lopes de Oliveira and Marcos Andre Goncalves and Claudia Bauzer Medeiros},
date = {1997-12-01},
institution = {IC-UNICAMP},
keyword = {Techreport},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/97-25.ps},
number = {IC-97-25},
title = {Designing and Implementing the User Interface of Geographic Digital Libraries},
type = {Technical Report},
year = {1997}
}
|
Oliveira, Juliano Lopes de
Design and Implementation of User Interfaces for Geographic Applications Systems (phdthesis)
Instituto de Computação - Unicamp,
phdthesis,
1997.
(
Abstract |
Links |
BibTeX |
Tags:
PhDThesis
)
@phdthesis{deOliveira1997,
abstract = {This thesis presents a framework of techniques and models to support the design and implementation of user-interfaces for geographic information systems (GIS). The proposal combines concepts from three areas of computer science -- Databases, Software Engineering and Human-Computer Interfaces -- in an innovative perspective, considering interactions not only with the user, but also with the underlying software. The framework covers both the architecture of the interface and the mechanisms for its construction. The basis of the interface-GIS integration is an object-oriented geographic database. The presented solution can be mapped to most of the existing interface development tools. The main results of the thesis are: a software architecture for the design and implementation of user-interfaces for geographic applications systems; an interface objects model for building user-interfaces which can be modified at run-time (dynamic interfaces); an interface customization mechanism based on active databases; and the creation of reusable interface components geared towards geographic applications. The techniques and tools introduced in this thesis were applied on the design and implementation of user-interfaces for two geographic applications systems, in urban and environmental areas. The results of this experience showed that this work contributes to diminishing the costs and improving the efficiency of the development of geographic interfaces.},
author = {Juliano Lopes de Oliveira},
date = {1997-12-01},
keyword = {PhDThesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2015/09/OliveiraJulianoLopesde.pdf},
school = {Instituto de Computação - Unicamp},
title = {Design and Implementation of User Interfaces for Geographic Applications Systems},
year = {1997}
}
|
Goncalves, Marcos Andre
Using hypermedia models in digital libraries for geographic data (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
1997.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{Goncalves1997,
abstract = {This dissertation presents a model and a methodology for the construction of digital libraries. A digital library was here considered to be a hypermedia application, based on an Object-Oriented Hypermedia DBMS environment. Model and methodology were used to model a specific application -- a Geographic Digital Library, whose goal is to collect and provide access to a large volume of geographic and conventional data. The construction of this library demanded the definition of a special set of metadata, which aggregates several existing standards. The geographic digital library contemplates two modes of interaction: browsing (in the traditional sense) and querying (supported by the underlying DBMS). The model integrates the OOHDM database model of Milet et al. with the Extended Dexter model, and applies extensions to this integration. The methodology extends the proposal of OOHDM, adapting it to allow modelling of digital libraries. The main contributions of the dissertation are: (a) a detailed survey of requirements of digital libraries, and of hypermedia data and authoring models, presented in a unified taxonomy; (b) an object-oriented hypermedia model for digital libraries; (c) a methodology which uses the model for construction of such libraries; and (d) a detailed specification of how to build geographic digital libraries, using model and methodology.},
author = {Marcos Andre Goncalves},
date = {1997-12-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2015/09/GonçalvesMarcosAndre.pdf},
school = {Instituto de Computação - Unicamp},
title = {Using hypermedia models in digital libraries for geographic data},
year = {1997}
}
|
Muinhos, Sergio;
Carvalho, Ariadne M. B. R.;
Medeiros, Claudia Bauzer
Implementation of a natural language translator system for GIS (conference)
Proceedings of the Latin American Informatics Conference,
Santiago, Chile,
1997.
(
BibTeX |
Tags:
Conference
)
@conference{Muinhos1997,
address = {Santiago, Chile},
author = {Sergio Muinhos and Ariadne M. B. R. Carvalho and Claudia Bauzer Medeiros},
booktitle = {Proceedings of the Latin American Informatics Conference},
date = {1997-11-01},
keyword = {Conference},
note = {In Portuguese.},
title = {Implementation of a natural language translator system for GIS},
year = {1997}
}
|
Cura, L. M. del Val;
Medeiros, Claudia Bauzer
Versions in Databases for GIS (conference)
Proceedings of the XII Brazilian Symposium on Database Systems,
Fortaleza, Brazil,
1997.
(
Links |
BibTeX |
Tags:
Conference
)
@conference{delCura1997,
address = {Fortaleza, Brazil},
author = {L. M. del Val Cura and Claudia Bauzer Medeiros},
booktitle = {Proceedings of the XII Brazilian Symposium on Database Systems},
date = {1997-10-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/sbbd97.pdf},
note = {In Portuguese},
title = {Versions in Databases for GIS},
year = {1997}
}
|
Paques, Henrique Wiermann
An Object-Oriented Method for Developing Distributed Information Systems (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
1997.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{Paques1997,
abstract = {With the increasing availability of distributed technologies, large companies have been pursuing more and more the development of distributed information systems. However, there is a lack of methods that consider the distribution aspect from the initial phase (requirements analysis) to the final phase (implementation). Indeed, distributed architecture specifications (e.g., OMG's CORBA) provide support only for the activities related to the software implementation process (the analysis process is not considered). This work presents an object-oriented (OO) method for developing distributed information systems which integrates concepts used in the conceptual models of OO methods with concepts used in distributed architecture specifications. This integration provides better usage of today's distributed technology (e.g., distributed databases, internet, intranet, etc.). During the analysis phase of this method, objects are grouped into subsystems based on the affinity that exists among them. This grouping process is conducted in order to induce better performance for the distributed information system. Finally, the work proposes a tool that automates the object grouping process.},
author = {Henrique Wiermann Paques},
date = {1997-05-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2015/09/PaquesHenriqueWiermann.pdf},
school = {Instituto de Computação - Unicamp},
title = {An Object-Oriented Method for Developing Distributed Information Systems},
year = {1997}
}
|
Oliveira, J. L.;
Cereja, N.;
Medeiros, Claudia Bauzer
Interface Intermediate Model for Geographic Information Systems (conference)
Proceedings of GIS Brazil' 97,
Curitiba, PR, Brazil,
1997.
(
BibTeX |
Tags:
Conference
)
@conference{Oliveira1997,
address = {Curitiba, PR, Brazil},
author = {J. L. Oliveira and N. Cereja and Claudia Bauzer Medeiros},
booktitle = {Proceedings of GIS Brazil' 97},
date = {1997-05-01},
keyword = {Conference},
note = {In Portuguese},
title = {Interface Intermediate Model for Geographic Information Systems},
year = {1997}
}
|
Medeiros, Claudia Bauzer;
Gonçalves, M. A.
Digital Libraries for Geographic Data (conference)
Proceedings of GIS Brazil' 97,
Curitiba, PR, Brazil,
1997.
(
BibTeX |
Tags:
Conference
)
@conference{Medeiros1997,
address = {Curitiba, PR, Brazil},
author = {Claudia Bauzer Medeiros and M. A. Gonçalves},
booktitle = {Proceedings of GIS Brazil' 97},
date = {1997-05-01},
keyword = {Conference},
note = {In Portuguese},
title = {Digital Libraries for Geographic Data},
year = {1997}
}
|
Costenaro, Walter Paulo
Characterization of Spatial Database Systems for Performance Analysis (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
1997.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{Costenaro1997,
abstract = {In this thesis, the area of spatial database systems is approached using the benchmark technique for performance analysis. This technique requires the monitoring of a database system using a real or synthetic database and workload (transactions). The ideal situation is the use of synthetic data that better resemble the situations found in real applications. This thesis uses real data (spatial and non-spatial) of a telecommunications outside plant management system to validate and enhance techniques to provide more realistic synthetic data and workload and to derive conclusions useful for performance studies of database systems in general.},
author = {Walter Paulo Costenaro},
date = {1997-05-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2015/09/CostenaroWalterPaulo_M.pdf},
school = {Instituto de Computação - Unicamp},
title = {Characterization of Spatial Database Systems for Performance Analysis},
year = {1997}
}
|
Cura, Luis Mariano Delval
Version Management in Databases for GIS (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
1997.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{Cura1997,
abstract = {Information systems contemplate version models and mechanisms for the management of multiple states of modeled entities. Versions are associated mainly with the management of alternatives in CAD/CASE systems and with the representation of the historical evolution of entities in temporal systems. This dissertation studies the use of versions in Geographic Information Systems (GIS). The focus of this work is on temporal applications, multiple representations of spatial entities, and the management of alternatives of spatial design. The main results presented are: a model and a mechanism for versions in order to support geographic applications; and the proposal of an extension to a standard OODBMS to support the model.},
author = {Luis Mariano Delval Cura},
date = {1997-03-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/DelValCuraLuisMariano_M2.pdf},
school = {Instituto de Computação - Unicamp},
title = {Version Management in Databases for GIS},
year = {1997}
}
|
Oliveira, Juliano Lopes de;
Medeiros, Claudia Bauzer;
Cilia, Mariano Ariel
Active Customization of GIS User Interfaces (conference)
Proceedings of the International Conference on Data Engineering (ICDE'97),
Birmingham, UK,
1997.
(
BibTeX |
Tags:
Conference
)
@conference{deOliveira1997b,
address = {Birmingham, UK},
author = {Juliano Lopes de Oliveira and Claudia Bauzer Medeiros and Mariano Ariel Cilia},
booktitle = {Proceedings of the International Conference on Data Engineering (ICDE'97)},
date = {1997-01-01},
keyword = {Conference},
title = {Active Customization of GIS User Interfaces},
year = {1997}
}
|
Pires, Fatima
A Computational Environment for Modeling Environmental Applications (phdthesis)
Instituto de Computação - Unicamp,
phdthesis,
1997.
(
Abstract |
Links |
BibTeX |
Tags:
PhDThesis
)
@phdthesis{Pires1997,
abstract = {Geographic applications are intrinsically complex due to the nature of the data manipulated and also due to the processes acting over these data. Today, many of these applications are built on top of a GIS (Geographic Information System), software that provides efficient storage, analysis and presentation tools for spatial data. Nevertheless, GISs present limitations that prevent users from taking full advantage of available GIS tools. These limitations are mainly related to their interface and modeling features and also to the fact that end-users are experts in their application domain but do not have the adequate background in software engineering or database design. This thesis is a contribution to solving these two limitations, presenting UAPE - an environment for modeling and designing geographic applications. With the environment, users are able to design applications according to their needs, abstracting the implementation details related to the underlying GIS. The major contributions are: (a) an object-oriented model, GMOD, which supports both data and process modeling; (b) a methodology for environmental application design; and (c) an environment, UAPE, that integrates model and methodology in order to help users in environmental application modeling and design.},
author = {Fatima Pires},
date = {1997-01-01},
keyword = {PhDThesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2015/09/SilvaMariadeFatimaR.O.Pires_D.pdf},
school = {Instituto de Computação - Unicamp},
title = {A Computational Environment for Modeling Environmental Applications},
year = {1997}
}
|
1996 |
Souza, Cid Carvalho de;
Medeiros, Claudia Bauzer;
Perreira, Ricardo Scachetti
Integrating Heuristics and Spatial Databases - a case study (Technical Report)
IC-UNICAMP,
Technical Report,
IC-96-18,
1996.
(
Abstract |
Links |
BibTeX |
Tags:
Techreport
)
@techreport{deSouza1996,
abstract = {This paper presents part of the ongoing efforts at IC-UNICAMP to apply heuristic algorithms to vectorial georeferenced data in order to help decision support in urban planning. The results reported are original in the sense that they combine recent research in both combinatorial algorithm development and geographic databases, using them in the solution of a practical problem. A first prototype, described in the paper, has already been developed and tested against real data on the city of Campinas, to support planning activities for the São Paulo State Post Office System, Brazil.},
author = {Cid Carvalho de Souza and Claudia Bauzer Medeiros and Ricardo Scachetti Perreira},
date = {1996-12-01},
institution = {IC-UNICAMP},
keyword = {Techreport},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/96-18.ps},
number = {IC-96-18},
title = {Integrating Heuristics and Spatial Databases - a case study},
type = {Technical Report},
year = {1996}
}
|
Cereja, Nevton
Views in GIS - a model and mechanisms (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
1996.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{Cereja1996,
abstract = {This thesis analyses the functionality offered by view mechanisms in order to satisfy specific GIS needs. The main results presented are: (1) a detailed analysis of the role views can play in the GIS context; (2) the specification of an object oriented view model to be used in GIS, which shows the need for additional data and semantic information in order to support the required functionality; (3) the presentation of a mechanism to support the model; and (4) a language to specify views in this model. The work developed is validated through the modelling of a real world application using the model and language proposed.},
author = {Nevton Cereja},
date = {1996-12-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2015/09/CerejaNevton_M.pdf},
school = {Instituto de Computação - Unicamp},
title = {Views in GIS - a model and mechanisms},
year = {1996}
}
|
Oliveira, Juliano L.;
Medeiros, Claudia Bauzer
User Interface Architectures, Languages and Models in Geographic Databases (conference)
Proceedings of the 11th Brazilian Database Symposium,
1996.
(
BibTeX |
Tags:
Conference
)
@conference{Oliveira1996,
author = {Juliano L. Oliveira and Claudia Bauzer Medeiros},
booktitle = {Proceedings of the 11th Brazilian Database Symposium},
date = {1996-10-01},
keyword = {Conference},
title = {User Interface Architectures, Languages and Models in Geographic Databases},
year = {1996}
}
|
Medeiros, Claudia Bauzer;
Cilia, M.
Combining Active Databases and GIS to Maintain Topological Constraints (conference)
Proceedings of the 11th Brazilian Database Symposium,
1996.
(
BibTeX |
Tags:
Conference
)
@conference{Medeiros1996b,
author = {Claudia Bauzer Medeiros and M. Cilia},
booktitle = {Proceedings of the 11th Brazilian Database Symposium},
date = {1996-10-01},
keyword = {Conference},
note = {In portuguese},
title = {Combining Active Databases and GIS to Maintain Topological Constraints},
year = {1996}
}
|
Medeiros, Claudia Bauzer;
Bellosta, M-J;
Jomier, G.
Managing Multiple Representations of Georeferenced Elements (conference)
Proceedings of DEXA '96,
1996.
(
Links |
BibTeX |
Tags:
Conference
)
@conference{Medeiros1996c,
author = {Claudia Bauzer Medeiros and M-J Bellosta and G. Jomier},
booktitle = {Proceedings of DEXA '96},
date = {1996-09-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/dexa96.pdf},
title = {Managing Multiple Representations of Georeferenced Elements},
year = {1996}
}
|
Oliveira, Juliano Lopes de;
Medeiros, Claudia Bauzer
User Interface Issues in Geographic Information Systems (Technical Report)
IC/Unicamp,
Technical Report,
IC-96-06,
1996.
(
Abstract |
Links |
BibTeX |
Tags:
Techreport
)
@techreport{deOliveira1996,
abstract = {Recently, much research effort has been employed in the area of Geographic Information Systems due to the vast potential for applications of this technology. Simultaneously, user interface subsystems of software products have received attention, since the interface has a marked influence on software acceptance. This paper presents an overview of research done in the intersection of these areas. The main approaches and the current problems of user interfaces for Geographical Information Systems are discussed and analyzed. This study concludes with open problems and new research directions for future work in this area.},
author = {Juliano Lopes de Oliveira and Claudia Bauzer Medeiros},
date = {1996-07-01},
institution = {IC/Unicamp},
keyword = {Techreport},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/96-06.ps},
number = {IC-96-06},
title = {User Interface Issues in Geographic Information Systems},
type = {Technical Report},
year = {1996}
}
|
Aguiar, C. Dutra de;
Medeiros, Claudia Bauzer
Building an Unified Basic Model from Stand-Alone Systems (conference)
Proceedings of GIS Brazil '96,
Curitiba, PR, Brazil,
1996.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{deAguiar1996,
abstract = {Urban planning is one of the main areas that use geographic information systems. The basis of urban planning implementation is the Basic Urban Mapping (BUM), which is the set of graphic and alphanumeric information related to the cartographic base of the cadastral plan. BUM modeling depends on the needs and purpose of the applications and the investment directed to them, as well as on the different perceptions of users. Thus, different applications in general do not share their models during development, increasing their cost. This paper describes an integration experience involving two real-life applications: Telebrás (Telecomunicações Brasileiras S/A) and Eletropaulo (Eletricidade de São Paulo S/A).},
address = {Curitiba, PR, Brazil},
author = {C. Dutra de Aguiar and Claudia Bauzer Medeiros},
booktitle = {Proceedings of GIS Brazil '96},
date = {1996-05-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/gisbrasil96a-1.pdf},
note = {In Portuguese},
title = {Building an Unified Basic Model from Stand-Alone Systems},
year = {1996}
}
|
Vasconcelos, Raimundo Claudio da Silva
Comparative Analysis of the Use of Relational and OO Models in GIS (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
1996.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{daVasconcelos1996,
abstract = {Geographical Information Systems (GIS) are known as non-conventional applications, and so they need different methodologies from those used for conventional applications. However, most existing GIS on the market use conventional tools. The goal of this thesis is to verify, through an example - UNINet - the use of ER and object-oriented (OMT) models in a GIS. The modeling and implementation in a relational (SQL92) and an object-oriented (O2) DBMS are analyzed, and the results are compared.},
author = {Raimundo Claudio da Silva Vasconcelos},
date = {1996-05-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2015/09/VasconcelosRaimundoClaudiodaSilva_M.pdf},
school = {Instituto de Computação - Unicamp},
title = {Comparative Analysis of the Use of Relational and OO Models in GIS},
year = {1996}
}
|
Wainer, J.;
Weske, M.;
Vossen, G.;
Medeiros, C. B.
Scientific Workflow Systems (conference)
Proc. NSF Workshop on Workflow and Process Automation: State-of-the-art and Future Directions,
Athens, GA,
1996.
(
BibTeX |
Tags:
Conference
)
@conference{Wainer1996,
address = {Athens, GA},
author = {J. Wainer and M. Weske and G. Vossen and C. B. Medeiros},
booktitle = {Proc. NSF Workshop on Workflow and Process Automation: State-of-the-art and Future Directions},
date = {1996-05-01},
keyword = {Conference},
title = {Scientific Workflow Systems},
year = {1996}
}
|
Pires, Fatima;
Medeiros, Claudia Bauzer
A Computational Environment for Geographic Applications Design Support (conference)
Proceedings of GIS Brazil '96,
Curitiba, PR, Brazil,
1996.
(
BibTeX |
Tags:
Conference
)
@conference{Pires1996,
address = {Curitiba, PR, Brazil},
author = {Fatima Pires and Claudia Bauzer Medeiros},
booktitle = {Proceedings of GIS Brazil '96},
date = {1996-05-01},
keyword = {Conference},
note = {In Portuguese},
title = {A Computational Environment for Geographic Applications Design Support},
year = {1996}
}
|
Medeiros, Claudia Bauzer;
Botelho, Marcio
Handling Time in GIS (conference)
Proceedings of GIS Brazil '96,
Curitiba, PR, Brazil,
1996.
(
BibTeX |
Tags:
Conference
)
@conference{Medeiros1996d,
address = {Curitiba, PR, Brazil},
author = {Claudia Bauzer Medeiros and Marcio Botelho},
booktitle = {Proceedings of GIS Brazil '96},
date = {1996-05-01},
keyword = {Conference},
note = {In Portuguese},
title = {Handling Time in GIS},
year = {1996}
}
|
Cilia, Mariano A.
Active Database System Support for Topological Constraints in Geographical Information Systems (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
1996.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{Cilia1996,
abstract = {This dissertation concerns the use of active databases in geographic applications. The results presented here extend the active database systems paradigm to solve the problem of maintaining spatial (topological) constraints. The solution for this problem is divided in three steps: i) topological constraint specification; ii) translation of the constraint into rules; and iii) automatic constraint maintenance, using the generated rules. This approach was used in the development of an active system prototype that incorporates an object-oriented geographic model, thus removing the gap between GIS and rule systems. The main contributions presented are a detailed study about binary topological relationships; a complete proposal for the problem of maintaining these relationships; and the definition of algorithms to verify the topological integrity (these algorithms are incorporated in the prototype).},
author = {Mariano A. Cilia},
date = {1996-03-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2015/09/CiliaMarianoAriel_M.pdf},
school = {Instituto de Computação - Unicamp},
title = {Active Database System Support for Topological Constraints in Geographical Information Systems},
year = {1996}
}
|
Weske, M.;
Vossen, G.;
Medeiros, C. B.
Scientific Workflow Management: WASA Architecture and Applications (Technical Report)
Fachbericht Angewandte Mathematik und Informatik Universität Münster,
Technical Report,
03/96-I,
1996.
(
BibTeX |
Tags:
Techreport
)
@techreport{Weske1996,
author = {M. Weske and G. Vossen and C. B. Medeiros},
date = {1996-01-01},
institution = {Fachbericht Angewandte Mathematik und Informatik Universität Münster},
keyword = {Techreport},
number = {03/96-I},
title = {Scientific Workflow Management: WASA Architecture and Applications},
type = {Technical Report},
year = {1996}
}
|
Meidanis, J.;
Vossen, G.;
Weske, M.
Using Workflow Management in DNA Sequencing (conference)
Proceedings of the First IFCIS International Conference on Cooperative Information Systems CoopIS'96,
1996.
(
Abstract |
BibTeX |
Tags:
Conference
)
@conference{Meidanis1996,
abstract = {DNA fragment assembly is an area which makes intensive use of computers. However, computer users in this field are typically not experts in computer science, but build their working environment on an ad-hoc basis. In this situation, it seems appropriate to offer a kind of support which can contribute to a better organization of working environments, and a better exploitation of computer hardware and software. The authors describe an approach in this direction based on the emerging paradigms of workflow modeling and management. In particular they offer three contributions: first, they discuss why workflow management can be fruitfully adopted in DNA fragment assembly, and describe one way to perceive and model sequencing processes as workflows. Second, they outline an architecture of a system intended to support sequencing applications, whose core component is a workflow management system. Finally, they sketch their experience of building a prototype using commercial workflow management technology.},
author = {J. Meidanis and G. Vossen and M. Weske},
booktitle = {Proceedings of the First IFCIS International Conference on Cooperative Information Systems CoopIS'96},
date = {1996-01-01},
keyword = {Conference},
note = {Full Version in Fachbericht Angewandte Mathematik und Informatik 23/95-I, Universität Münster, 1995},
pages = {114-123},
title = {Using Workflow Management in DNA Sequencing},
year = {1996}
}
|
Medeiros, C. B.;
Vossen, G.;
Weske, M.
GEO-WASA - Combining GIS Technology with Workflow Management (conference)
Proceedings of the Seventh Israeli Conference on Computer Systems and Software Engineering,
IEEE Computer Society Press,
Los Alamitos, CA,
1996.
(
BibTeX |
Tags:
Conference
)
@conference{Medeiros1996,
address = {Los Alamitos, CA},
author = {C. B. Medeiros and G. Vossen and M. Weske},
booktitle = {Proceedings of the Seventh Israeli Conference on Computer Systems and Software Engineering},
date = {1996-01-01},
keyword = {Conference},
note = {Full version in Fachbericht Angewandte Mathematik und Informatik 02/96-I, Universität Münster, 1996.},
pages = {129-139},
publisher = {IEEE Computer Society Press},
title = {GEO-WASA - Combining GIS Technology with Workflow Management},
year = {1996}
}
|
1995 |
Botelho, Marcio de Araujo
Incorporation of Spatial-Temporal Facilities in OODB (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
1995.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{deBotelho1995,
abstract = {This dissertation presents a framework to incorporate support for spatial-temporal data in object-oriented database management systems. The main contributions are: (i) description of a spatial-temporal object-oriented data model, allowing the representation of spatial-temporal data evolution, common in geographic information systems; (ii) definition of data structures in an object-oriented database to support the model, storing spatial data in the vector format. These structures make it possible to store the temporal evolution of the objects, which encapsulate access methods to their temporal states; (iii) specification of a taxonomy of spatial-temporal queries in geographic information systems. This proposal extends other GIS models, bringing the possibility of incorporating new facilities in future systems.},
author = {Marcio de Araujo Botelho},
date = {1995-12-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2015/09/BotelhoMarciodeAraujo_M.pdf},
school = {Instituto de Computação - Unicamp},
title = {Incorporation of Spatial-Temporal Facilities in OODB},
year = {1995}
}
|
Medeiros, Claudia Bauzer;
Cilia, Mariano
Maintenance of Binary Topological Constraints through Active Databases (conference)
Proceedings of the 3rd ACM Workshop on Advances in GIS,
ACM,
Baltimore, USA,
1995.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Medeiros1995b,
abstract = {This paper presents a system developed at UNICAMP for automatically maintaining topological constraints in a geographic database. This system is based on extending to spatial data the notion of standard integrity maintenance through active databases. Topological relations, defined by the user, are transformed into spatial integrity constraints, which are stored in the database as production rules. These rules are used to maintain the corresponding set of topological relations, for all applications that use the database. This extends previous work on rules and GIS by incorporating the rules into the DBMS rather than having them handled by a separate module.},
address = {Baltimore, USA},
author = {Claudia Bauzer Medeiros and Mariano Cilia},
booktitle = {Proceedings of the 3rd ACM Workshop on Advances in GIS},
date = {1995-12-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/acm-gis95.pdf},
pages = {127-134},
publisher = {ACM},
title = {Maintenance of Binary Topological Constraints through Active Databases},
year = {1995}
}
|
Oliveira, J. L. de;
Medeiros, Claudia Bauzer
A Direct Manipulation User Interface for Querying Geographic Databases (conference)
Proceedings of the International Conference Applications of Databases (ADB '95),
San Jose, USA,
1995.
(
Links |
BibTeX |
Tags:
Conference
)
@conference{deOliveira1995,
address = {San Jose, USA},
author = {J. L. de Oliveira and Claudia Bauzer Medeiros},
booktitle = {Proceedings of the International Conference Applications of Databases (ADB '95)},
date = {1995-11-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/adb95.pdf},
title = {A Direct Manipulation User Interface for Querying Geographic Databases},
year = {1995}
}
|
Medeiros, Claudia Bauzer;
Pires, F.
A Computational Environment for GIS Applications Modeling (conference)
Notes on Geo-referenced Information (CIG),
1995.
(
BibTeX |
Tags:
Conference
)
@conference{Medeiros1995c,
author = {Claudia Bauzer Medeiros and F. Pires},
booktitle = {Notes on Geo-referenced Information (CIG)},
date = {1995-11-01},
keyword = {Conference},
note = {In Portuguese},
title = {A Computational Environment for GIS Applications Modeling},
year = {1995}
}
|
Oliveira, J. L.;
Cunha, C. Q.;
Magalhães, G. C.
Object Model for Constructing Dynamic Visual Interfaces (conference)
Proceedings of the Brazilian Symposium on Software Engineering,
Recife, PE, Brazil,
1995.
(
BibTeX |
Tags:
Conference
)
@conference{Oliveira1995,
address = {Recife, PE, Brazil},
author = {J. L. Oliveira and C. Q. Cunha and G. C. Magalhães},
booktitle = {Proceedings of the Brazilian Symposium on Software Engineering},
date = {1995-10-01},
keyword = {Conference},
note = {In Portuguese},
title = {Object Model for Constructing Dynamic Visual Interfaces},
year = {1995}
}
|
Aguiar, C. D. de;
Medeiros, Claudia Bauzer
An Architecture for the Integration of Heterogeneous Databases Applied to Urban Planning Systems (conference)
Proceedings of the Integrated Seminar on Software and Hardware,
Canela, RS, Brazil,
1995.
(
BibTeX |
Tags:
Conference
)
@conference{deAguiar1995b,
address = {Canela, RS, Brazil},
author = {C. D. de Aguiar and Claudia Bauzer Medeiros},
booktitle = {Proceedings of the Integrated Seminar on Software and Hardware},
date = {1995-08-01},
keyword = {Conference},
note = {In Portuguese},
pages = {551--562},
title = {An Architecture for the Integration of Heterogeneous Databases Applied to Urban Planning Systems},
year = {1995}
}
|
Lucena, F.;
Liesenberg, H.;
Buzato, L.
Xchart-Based Complex Dialogue Development (conference)
Proceedings of the Nipo-Brazilian Symposium on Science and Technology,
Campos do Jordao, SP, Brazil,
1995.
(
Links |
BibTeX |
Tags:
Conference
)
@conference{Lucena1995b,
address = {Campos do Jordao, SP, Brazil},
author = {F. Lucena and H. Liesenberg and L. Buzato},
booktitle = {Proceedings of the Nipo-Brazilian Symposium on Science and Technology},
date = {1995-08-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/nipo.08.95.ps},
title = {Xchart-Based Complex Dialogue Development},
year = {1995}
}
|
Lucena, F.;
Liesenberg, H.
Human Computer Interface Construction: A proposal for an undergraduate discipline (conference)
Proceedings of the Workshop on Education in Informatics (XV SBC),
Canela, RS, Brazil,
1995.
(
Links |
BibTeX |
Tags:
Conference
)
@conference{Lucena1995,
address = {Canela, RS, Brazil},
author = {F. Lucena and H. Liesenberg},
booktitle = {Proceedings of the Workshop on Education in Informatics (XV SBC)},
date = {1995-08-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/wei.08.95.ps},
note = {In Portuguese},
title = {Human Computer Interface Construction: A proposal for an undergraduate discipline},
year = {1995}
}
|
Cilia, M. A.
Active Databases (conference)
Proceedings of the 24 JAIIO,
Buenos Aires, Argentina,
1995.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Cilia1995,
abstract = {Active databases are database systems extended with a rule system. This system is able to recognize events, activate the corresponding rules and, when the condition holds, execute the corresponding actions, following the E-C-A paradigm proposed in [DBM88]. These systems can be used for financial applications (commodity trading, portfolio management, currency trading, etc.), multimedia applications, industrial production control (CIM, inventory control, etc.), and monitoring (air traffic control, etc.), among others [Buch94]. They are also used for functions of the database kernel itself, such as consistency maintenance, view maintenance, access control, and version management. This article describes the main characteristics of active databases, classified into three groups: rule definition, execution model, and optimization. It then describes the active features of the main prototypes developed in the area.},
address = {Buenos Aires, Argentina},
author = {M. A. Cilia},
booktitle = {Proceedings of the 24 JAIIO},
date = {1995-08-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/adb-24j.pdf},
note = {In Portuguese},
title = {Active Databases},
year = {1995}
}
|
Ciferri, Ricardo Rodrigues
Benchmarks for Geographic Information Systems (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
1995.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{Ciferri1995,
abstract = {Geographical Information Systems (GIS) deal with data that are special in nature and size. Thus, the technologies developed for conventional data base systems such as access methods, query optimizers and languages, have to be modified in order to satisfy the needs of a GIS. These modifications, embedded in several GIS, or being proposed by research projects, need to be evaluated. This thesis proposes mechanisms for evaluating GIS based on benchmarks. The benchmark is composed of a workload to be submitted to the GIS being analysed and data characterizing the information. The workload is made of a set of primitive transactions that can be combined in order to derive transactions of any degree of complexity. These primitive transactions are oriented to spatial data but not dependent on the way they are represented (vector or raster). The benchmark data base characterization was defined in terms of the types of data required by applications that use georeferencing, and by the need to generate complex and controlled artificial data. The proposed technique and methods were used to show how to create the transactions and the data for a given application.},
author = {Ricardo Rodrigues Ciferri},
date = {1995-06-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2015/09/CiferriRicardoRodrigues_M.pdf},
school = {Instituto de Computação - Unicamp},
title = {Benchmarks for Geographic Information Systems},
year = {1995}
}
|
Aguiar, Cristina Dutra de
Heterogeneous Database Integration into Urban Planning Applications (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
1995.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{deAguiar1995,
abstract = {The name Geographic Information Systems (GIS) denotes software that handles georeferenced data - data connected spatially to the earth surface. Modern GIS are based on relational database systems, extended to efficiently support georeferenced applications. Recent studies indicate that the object oriented paradigm is more adequate for this type of system. However, the migration of relational to object oriented systems is costly. This dissertation presents a solution to this problem, which consists in defining mechanisms that allow the integration of the present (relation based) systems and the new (object based) systems, with emphasis in urban applications. The architecture proposed integrates object oriented and relational DBMS designed, respectively, using the OMT and ECR models. In order to allow this integration the dissertation developed primitive operations for mapping between both data models, as well as primitives for converting OMT schemas into schemas of the O2 object oriented DBMS. This proposal was validated through the integration of two real life applications which use the basic elements of urban planning: Telebrás' telephone network management system and Eletropaulo's electrical network management system.},
author = {Cristina Dutra de Aguiar},
date = {1995-03-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2015/09/AguiarCristinaDutrade_M.pdf},
school = {Instituto de Computação - Unicamp},
title = {Heterogeneous Database Integration into Urban Planning Applications},
year = {1995}
}
|
Oliveira, Juliano Lopes de;
Medeiros, Claudia Bauzer
A Direct Manipulation User Interface for Querying Geographic Databases (Technical Report)
IC/Unicamp,
Technical Report,
DCC-95-08,
1995.
(
Abstract |
Links |
BibTeX |
Tags:
Techreport
)
@techreport{deOliveira1995b,
abstract = {This paper presents an architecture for a direct manipulation user interface for browsing and querying geographic data. This interface provides users with a high level object oriented conceptual view of the underlying database, independent of the database's native data model. It lets users manipulate different representations of a single georeferenced entity, thereby adding a new degree of flexibility to querying facilities.},
author = {Juliano Lopes de Oliveira and Claudia Bauzer Medeiros},
date = {1995-01-01},
institution = {IC/Unicamp},
keyword = {Techreport},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/95-08.ps},
number = {DCC-95-08},
title = {A Direct Manipulation User Interface for Querying Geographic Databases},
type = {Technical Report},
year = {1995}
}
|
Medeiros, Claudia Bauzer;
Vossen, G.;
Weske, M.
WASA: A Workflow-Based Architecture to Support Scientific Database Applications (conference)
Proceedings of the 6th DEXA Conference,
London, UK,
1995.
(
BibTeX |
Tags:
Conference
)
@conference{Medeiros1995,
address = {London, UK},
author = {Claudia Bauzer Medeiros and G. Vossen and M. Weske},
booktitle = {Proceedings of the 6th DEXA Conference},
date = {1995-01-01},
keyword = {Conference},
note = {Extended version: Fachbericht Angewandte Mathematik und Informatik 02/95-I, Universität Münster, 1995},
title = {WASA: A Workflow-Based Architecture to Support Scientific Database Applications},
year = {1995}
}
|
1994 |
Sampaio, Pedro Rafael Falcone
Dynamic Constraints in Active Object-Oriented Databases (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
1994.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{Sampaio1994,
abstract = {This dissertation addresses the problem of modeling and enforcing general integrity constraints in database systems. The solution is based on the use of active object-oriented database management systems (DBMS) that provide support for rule mechanisms. The work proposes a strategy to be applied during application design. This strategy takes into consideration the behavior and active features of the DBMS. The strategy's goal is to represent the constraints in the conceptual design using CDL - a declarative and model-independent language - and to provide mappings in terms of production rules responsible for constraint enforcement. The main contributions presented are: the proposal of a taxonomy for integrity constraints in modeling information systems; the specification of the CDL constraint language; general heuristics for mapping constraints expressed in CDL into production rules in the active database; and the specification of the characteristics needed from active databases in order to support general integrity constraints in information systems. This dissertation extends previous proposals found in the literature, providing support to model dynamic constraints in database system design using active object-oriented DBMS.},
author = {Pedro Rafael Falcone Sampaio},
date = {1994-12-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2015/09/SampaioPedroRafaelFalcone.pdf},
school = {Instituto de Computação - Unicamp},
title = {Dynamic Constraints in Active Object-Oriented Databases},
year = {1994}
}
|
Medeiros, Claudia Bauzer
A Multi-disciplinary Project on Geoprocessing Methods and Techniques (conference)
GIS Brazil '94,
Curitiba, Brazil,
1994.
(
BibTeX |
Tags:
Conference
)
@conference{Medeiros1994b,
address = {Curitiba, Brazil},
author = {Claudia Bauzer Medeiros},
booktitle = {GIS Brazil '94},
date = {1994-10-01},
keyword = {Conference},
note = {In Portuguese},
pages = {29-37},
title = {A Multi-disciplinary Project on Geoprocessing Methods and Techniques},
year = {1994}
}
|
Leite, N. J.;
Roberto, M. V. S.;
Oliveira, C. M. Marques de
A System to Extract Information from Nautical Charts (conference)
GIS Brazil '94,
Curitiba, Brazil,
1994.
(
BibTeX |
Tags:
Conference
)
@conference{Leite1994,
address = {Curitiba, Brazil},
author = {N. J. Leite and M. V. S. Roberto and C. M. Marques de Oliveira},
booktitle = {GIS Brazil '94},
date = {1994-10-01},
keyword = {Conference},
note = {In Portuguese},
pages = {67-76},
title = {A System to Extract Information from Nautical Charts},
year = {1994}
}
|
Brayner, A. R. A.;
Medeiros, C. Bauzer
Incorporating Time in an Object Oriented Database System (conference)
Brazilian Symposium on Database Systems,
Sao Carlos, Brazil,
1994.
(
BibTeX |
Tags:
Conference
)
@conference{Brayner1994b,
address = {Sao Carlos, Brazil},
author = {A. R. A. Brayner and C. Bauzer Medeiros},
booktitle = {Brazilian Symposium on Database Systems},
date = {1994-09-01},
keyword = {Conference},
note = {In Portuguese},
pages = {16-29},
title = {Incorporating Time in an Object Oriented Database System},
year = {1994}
}
|
Oliveira, J. L.
On the Development of User Interface Systems for Object-Oriented Databases (conference)
ACM Workshop on Advanced Visual Interfaces,
Bari, Italy,
1994.
(
BibTeX |
Tags:
Conference
)
@conference{Oliveira1994b,
address = {Bari, Italy},
author = {J. L. Oliveira},
booktitle = {ACM Workshop on Advanced Visual Interfaces},
date = {1994-06-01},
keyword = {Conference},
pages = {237-239},
title = {On the Development of User Interface Systems for Object-Oriented Databases},
year = {1994}
}
|
Medeiros, Claudia Bauzer;
Jomier, Geneviéve
Using Versions in GIS (Technical Report)
Institute of Computing, State University of Campinas,
Technical Report,
94-05,
1994.
(
Abstract |
Links |
BibTeX |
Tags:
Techreport
)
@techreport{Medeiros1994c,
abstract = {Geographic information systems (GIS) have become important tools in public planning activities (e.g., in environmental or utility management). This type of activity requires the creation and management of alternative scenarios, as well as analysis of temporal data evolution. Existing systems provide limited support for these operations, and appropriate tools are yet to be developed. This paper presents a solution to this problem. This solution is based on managing temporal data and alternatives using the DBV version mechanism. It provides efficient handling and storage of versions, and supports the creation of alternatives for decision-making activities. A reduced version of this report appeared in the Proceedings of the DEXA '94 Conference --- 5th International Conference on Database and Expert Systems Applications, Athens, Greece.},
author = {Claudia Bauzer Medeiros and Geneviéve Jomier},
date = {1994-06-01},
institution = {Institute of Computing, State University of Campinas},
keyword = {Techreport},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/94-05.pdf},
number = {94-05},
title = {Using Versions in GIS},
type = {Technical Report},
year = {1994}
}
|
Brayner, Ângelo Roncalli Alencar;
Medeiros, Claudia Bauzer
Incorporating Time in an Object Oriented Database System (in Portuguese) (Technical Report)
Institute of Computing, State University of Campinas,
Technical Report,
94-02,
1994.
(
Abstract |
Links |
BibTeX |
Tags:
Techreport
)
@techreport{Brayner1994c,
abstract = {This work describes the Temporal Management Layer, a temporal management subsystem implemented for the O2 object oriented database. This subsystem allows the definition of temporal schemas and the manipulation (queries and updates) of objects in those schemas. The commands are translated by the subsystem into programs and queries executed by the underlying database. The work contributes to the discussion on the implementation of object oriented temporal systems, a topic little explored in the literature.},
author = {Ângelo Roncalli Alencar Brayner and Claudia Bauzer Medeiros},
date = {1994-04-01},
institution = {Institute of Computing, State University of Campinas},
keyword = {Techreport},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/94-02.pdf},
number = {94-02},
title = {Incorporating Time in an Object Oriented Database System (in Portuguese)},
type = {Technical Report},
year = {1994}
}
|
Medeiros, C. B.;
Pires, F.
Databases for GIS (article)
ACM SIGMOD Record,
1,
1994.
(
Links |
BibTeX |
Tags:
Article
)
@article{Medeiros1994,
author = {C. B. Medeiros and F. Pires},
date = {1994-03-01},
journal = {ACM SIGMOD Record},
keyword = {Article},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/GIS-1.pdf},
number = {1},
pages = {107-115},
title = {Databases for GIS},
volume = {23},
year = {1994}
}
|
Brayner, Ângelo Roncalli Alencar
Implementation of a Temporal System in an Object Oriented Database (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
1994.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{Brayner1994,
abstract = {Many temporal data models have been suggested. A great number of these models is based on incorporating time only for relational database systems. However, the applications that require temporal data management present an object-oriented nature. Research on object-oriented database systems is still in its initial phase. This work presents a practical contribution to the research in this area. This contribution consists in the development of a temporal data management system for an object oriented database. This system - the Temporal Management Layer - was built on top of the O2 database system and allows the definition and management of object oriented temporal data, as well as the processing of temporal queries.},
author = {Ângelo Roncalli Alencar Brayner},
date = {1994-03-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2015/09/BraynerÂngeloRoncalliAlencar.pdf},
school = {Instituto de Computação - Unicamp},
title = {Implementation of a Temporal System in an Object Oriented Database},
year = {1994}
}
|
Oliveira, L. M.;
Medeiros, C. Bauzer
Managing Time in Object-Oriented Databases (conference)
Proceedings of the Integrated Seminar on Software and Hardware,
Caxambu, Brazil,
1994.
(
BibTeX |
Tags:
Conference
)
@conference{Oliveira1994,
address = {Caxambu, Brazil},
author = {L. M. Oliveira and C. Bauzer Medeiros},
booktitle = {Proceedings of the Integrated Seminar on Software and Hardware},
date = {1994-01-01},
keyword = {Conference},
pages = {459-474},
title = {Managing Time in Object-Oriented Databases},
year = {1994}
}
|
Medeiros, C. B.;
Casanova, M. A.;
Camara, G.
The DOMUS Project - Building an OODB GIS for Environmental Control (conference)
Proceedings of the International Workshop on Advanced Research in GIS (IGIS '94),
1994.
(
BibTeX |
Tags:
Conference
)
@conference{Medeiros1994d,
author = {C. B. Medeiros and M. A. Casanova and G. Camara},
booktitle = {Proceedings of the International Workshop on Advanced Research in GIS (IGIS '94)},
date = {1994-01-01},
keyword = {Conference},
note = {Also in Lecture Notes in Computer Science no. 884},
pages = {45-54},
title = {The DOMUS Project - Building an OODB GIS for Environmental Control},
year = {1994}
}
|
Camara, G.;
Freitas, U.;
Souza, R.;
Casanova, M.;
Hemerly, A.;
Medeiros, C. B.
A Model to Cultivate Objects and Manipulate Fields (conference)
ACM Workshop on Advances in GIS,
1994.
(
BibTeX |
Tags:
Conference
)
@conference{Camara1994,
author = {G. Camara and U. Freitas and R. Souza and M. Casanova and A. Hemerly and C. B. Medeiros},
booktitle = {ACM Workshop on Advances in GIS},
date = {1994-01-01},
keyword = {Conference},
pages = {20-28},
title = {A Model to Cultivate Objects and Manipulate Fields},
year = {1994}
}
|
1993 |
Oliveira, J. L.;
Anido, R. O.
Browsing and Querying in Object Oriented Databases (conference)
Proceedings of the Second International Conference on Information and Knowledge Management,
Washington, DC, USA,
1993.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Oliveira1993b,
abstract = {We present a new interface for Object-Oriented Database Management Systems (OODBMSs). The GOODIES system combines and expands the functions of many existing interface systems, introducing some new concepts for improved browsing in an OODBMS. The implementation of GOODIES proposes a new approach to database interface development: instead of being strongly dependent on the underlying DBMS, GOODIES is based on the main features of the object-oriented data model. The system design is based on an internal model and on an external model. The internal model defines the relationships that bind the interface to the DBMS, and it is fully described in [Oli92]. The external model determines the possible interaction between the user and the interface system. This paper describes the concepts of the external model of the GOODIES system.},
address = {Washington, DC, USA},
author = {J. L. Oliveira and R. O. Anido},
booktitle = {Proceedings of the Second International Conference on Information and Knowledge Management},
date = {1993-11-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/browsing-and-quering-in-object-oriented-databases.pdf},
pages = {364-373},
title = {Browsing and Querying in Object Oriented Databases},
year = {1993}
}
We present a new interface for Object-Oriented Database Management Systems (OODBMSs). The GOODIES system combines and expands the functions of many existing interface systems, introducing some new concepts for improved browsing in an OODBMS. The implementation of GOODIES proposes a new approach to database interface development: instead of being strongly dependent on the underlying DBMS, GOODIES is based on the main features of the object-oriented data model. The system design is based on an internal model and on an external model. The internal model defines the relationships that bind the interface to the DBMS, and it is fully described in [Oli92]. The external model determines the possible interaction between the user and the interface system. This paper describes the concepts of the external model of the GOODIES system.
|
Pires, F.;
Medeiros, C. B.;
Barros, A.
Modelling Geographic Information Systems using an Object Oriented Framework (conference)
Proceedings of the XIII International Conference of the Chilean Computer Science Society,
La Serena, Chile,
1993.
(
BibTeX |
Tags:
Conference
)
@conference{Pires1993b,
address = {La Serena, Chile},
author = {F. Pires and C. B. Medeiros and A. Barros},
booktitle = {Proceedings of the XIII International Conference of the Chilean Computer Science Society},
date = {1993-10-01},
keyword = {Conference},
pages = {217-232},
title = {Modelling Geographic Information Systems using an Object Oriented Framework},
year = {1993}
}
|
Pires, F.;
Medeiros, C. B.
A Methodology for Development of GIS (conference)
Proceedings of the Brazilian Symposium on Software Engineering,
Rio de Janeiro, Brazil,
1993.
(
BibTeX |
Tags:
Conference
)
@conference{Pires1993,
address = {Rio de Janeiro, Brazil},
author = {F. Pires and C. B. Medeiros},
booktitle = {Proceedings of the Brazilian Symposium on Software Engineering},
date = {1993-10-01},
keyword = {Conference},
note = {In Portuguese},
pages = {351-364},
title = {A Methodology for Development of GIS},
year = {1993}
}
|
Oliveira, J. L.;
Anido, R. O.
Integrating a Browsing Interface in Different Object-Oriented Databases (conference)
Proceedings of the Thirteenth Brazilian Computer Society Congress,
Florianopolis, SC, Brazil,
1993.
(
BibTeX |
Tags:
Conference
)
@conference{Oliveira1993c,
address = {Florianopolis, SC, Brazil},
author = {J. L. Oliveira and R. O. Anido},
booktitle = {Proceedings of the Thirteenth Brazilian Computer Society Congress},
date = {1993-09-01},
keyword = {Conference},
note = {In Portuguese},
pages = {61-75},
title = {Integrating a Browsing Interface in Different Object-Oriented Databases},
year = {1993}
}
|
Oliveira, Ronaldo Lopes de
Data Model Transparency in Heterogeneous Database Systems (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
1993.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{deOliveira1993b,
abstract = {Heterogeneous Database Systems (HDBSs) integrate, in a cooperative environment, autonomous and heterogeneous database systems (DBSs). Model transparency in HDBSs is an important property that allows users to deal with global data using a single model and database language. This work proposes and discusses solutions to support such a property in HDBSs built through the integration of network DBSs and relational DBSs. The solutions presented include methodologies for schema conversion, and architectures and algorithms for command transformation. The approach used in this work differs from other published approaches on two main points. First, it assumes that each user will manipulate global data using the data model and database language he used before the HDBS existed. Second, it proposes mechanisms to support access to the HDBS's data through application programs instead of ad-hoc transactions.},
author = {Ronaldo Lopes de Oliveira},
date = {1993-08-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2015/09/OliveiraRonaldoLopesde.pdf},
school = {Instituto de Computação - Unicamp},
title = {Data Model Transparency in Heterogeneous Database Systems},
year = {1993}
}
Heterogeneous Database Systems (HDBSs) integrate, in a cooperative environment, autonomous and heterogeneous database systems (DBSs). Model transparency in HDBSs is an important property that allows users to deal with global data using a single model and database language. This work proposes and discusses solutions to support such a property in HDBSs built through the integration of network DBSs and relational DBSs. The solutions presented include methodologies for schema conversion, and architectures and algorithms for command transformation. The approach used in this work differs from other published approaches on two main points. First, it assumes that each user will manipulate global data using the data model and database language he used before the HDBS existed. Second, it proposes mechanisms to support access to the HDBS's data through application programs instead of ad-hoc transactions.
|
Oliveira, L. M.;
Medeiros, C. Bauzer
Temporal Object Modeling (conference)
Proceedings of the Latin American Conference on Informatics,
Buenos Aires, Argentina,
1993.
(
BibTeX |
Tags:
Conference
)
@conference{Oliveira1993,
address = {Buenos Aires, Argentina},
author = {L. M. Oliveira and C. Bauzer Medeiros},
booktitle = {Proceedings of the Latin American Conference on Informatics},
date = {1993-08-01},
keyword = {Conference},
pages = {79-98},
title = {Temporal Object Modeling},
year = {1993}
}
|
Oliveira, Ronaldo Lopes de;
Magalhães, Geovane Cayres
Methodologies for Schema Conversion in Heterogeneous Database Systems (in Portuguese) (Technical Report)
Institute of Computing, State University of Campinas,
Technical Report,
93-17,
1993.
(
Abstract |
Links |
BibTeX |
Tags:
Techreport
)
@techreport{deOliveira1993c,
abstract = {Heterogeneous Database Systems (HDBSs) are systems that integrate, in a cooperative environment, database systems (DBSs) that are autonomous and mutually heterogeneous with respect to the semantics of their data and/or the characteristics of their DBMSs (data model, data manipulation languages, and implementation aspects). A desirable property in such systems is model transparency, which allows a user to view and manipulate the data located in different DBSs through the data model and data manipulation language that he used in his local DBS before it was incorporated into the HDBS. This property is achieved by mapping between the data and operations of the component DBSs. This work presents methodologies for schema conversion in HDBSs built to integrate DBSs that use the network data model or the relational data model. These methodologies are important for achieving model transparency.},
author = {Ronaldo Lopes de Oliveira and Geovane Cayres Magalhães},
date = {1993-07-01},
institution = {Institute of Computing, State University of Campinas},
keyword = {Techreport},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/93-17.pdf},
number = {93-17},
title = {Methodologies for Schema Conversion in Heterogeneous Database Systems (in Portuguese)},
type = {Technical Report},
year = {1993}
}
Heterogeneous Database Systems (HDBSs) are systems that integrate, in a cooperative environment, database systems (DBSs) that are autonomous and mutually heterogeneous with respect to the semantics of their data and/or the characteristics of their DBMSs (data model, data manipulation languages, and implementation aspects). A desirable property in such systems is model transparency, which allows a user to view and manipulate the data located in different DBSs through the data model and data manipulation language that he used in his local DBS before it was incorporated into the HDBS. This property is achieved by mapping between the data and operations of the component DBSs. This work presents methodologies for schema conversion in HDBSs built to integrate DBSs that use the network data model or the relational data model. These methodologies are important for achieving model transparency.
|
Oliveira, Lincoln Cesar Medina de
Incorporating the Temporal Dimension in Object Oriented Database Systems (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
1993.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{deOliveira1993,
abstract = {The last two decades have witnessed intensive research on temporal databases. Although several results have already been achieved for temporal relational systems, there are few proposals that consider incorporating the temporal dimension into object-oriented database systems. This dissertation presents the following original results: a broad survey of temporal models and languages; the proposal of a new temporal model (TOODM), based on the object-oriented paradigm; and the specification of a query language (TOOL) for the proposed model.},
author = {Lincoln Cesar Medina de Oliveira},
date = {1993-07-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2015/09/OliveiraLincolnCesarMadinade_M.pdf},
school = {Instituto de Computação - Unicamp},
title = {Incorporating the Temporal Dimension in Object Oriented Database Systems},
year = {1993}
}
The last two decades have witnessed intensive research on temporal databases. Although several results have already been achieved for temporal relational systems, there are few proposals that consider incorporating the temporal dimension into object-oriented database systems. This dissertation presents the following original results: a broad survey of temporal models and languages; the proposal of a new temporal model (TOODM), based on the object-oriented paradigm; and the specification of a query language (TOOL) for the proposed model.
|
Oliveira, Lincoln M.;
Medeiros, Claudia Bauzer
Managing Time in Object-Oriented Databases (Technical Report)
Institute of Computing, State University of Campinas,
Technical Report,
93-14,
1993.
(
Abstract |
Links |
BibTeX |
Tags:
Techreport
)
@techreport{Oliveira1993b,
abstract = {This paper presents a new approach for modelling and querying temporal object oriented databases. The model presented in this paper extends previous work in the area, by supporting the evolution of all object properties through time (inheritance, composition and behavior), and allowing temporal schema evolution. A prototype of this model is being implemented as a time-managing layer on top of the O2 object-oriented database system. In order to manipulate our temporal objects, we have extended the O2 query language with temporal constructs, which we also discuss in the paper.},
institution = {Institute of Computing, State University of Campinas},
author = {Lincoln M. Oliveira and Claudia Bauzer Medeiros},
date = {1993-07-01},
keyword = {Techreport},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/93-14.pdf},
number = {93-14},
title = {Managing Time in Object-Oriented Databases},
type = {Technical Report},
year = {1993}
}
This paper presents a new approach for modelling and querying temporal object oriented databases. The model presented in this paper extends previous work in the area, by supporting the evolution of all object properties through time (inheritance, composition and behavior), and allowing temporal schema evolution. A prototype of this model is being implemented as a time-managing layer on top of the O2 object-oriented database system. In order to manipulate our temporal objects, we have extended the O2 query language with temporal constructs, which we also discuss in the paper.
|
Medeiros, Claudia Bauzer;
Magalhães, Geovane Cayres
Rule Application in GIS - a Case Study (Technical Report)
Institute of Computing, State University of Campinas,
Technical Report,
93-18,
1993.
(
Abstract |
Links |
BibTeX |
Tags:
Techreport
)
@techreport{Medeiros1993b,
abstract = {Production rules in database systems have been used mostly for integrity-related issues (e.g., derived data maintenance, authority checking and constraint verification). This paper analyzes the need for using production rules in geographic information systems, for a special family of applications---utility management systems. This framework is applied to a real-life, large-scale application---the development of an integrated database system for the maintenance and expansion of the telephone network in Brazil.},
author = {Claudia Bauzer Medeiros and Geovane Cayres Magalhães},
date = {1993-07-01},
institution = {Institute of Computing, State University of Campinas},
keyword = {Techreport},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/93-18.pdf},
number = {93-18},
title = {Rule Application in GIS - a Case Study},
type = {Technical Report},
year = {1993}
}
Production rules in database systems have been used mostly for integrity-related issues (e.g., derived data maintenance, authority checking and constraint verification). This paper analyzes the need for using production rules in geographic information systems, for a special family of applications---utility management systems. This framework is applied to a real-life, large-scale application---the development of an integrated database system for the maintenance and expansion of the telephone network in Brazil.
|
Pires, Fátima;
Medeiros, Claudia Bauzer;
Silva, Ardemiris Barros
Modelling Geographic Information Systems using an Object Oriented Framework (Technical Report)
Institute of Computing, State University of Campinas,
Technical Report,
93-13,
1993.
(
Abstract |
Links |
BibTeX |
Tags:
Techreport
)
@techreport{Pires1993c,
abstract = {Geographic information systems demand the processing of complex data using specialized operations, not available in traditional database systems. Even though there exist commercial systems that provide some of these facilities, there is a lack of proper support, which should cover not only the implementation but also the design stage. This paper answers this latter need, discussing the steps for modelling databases for geographic information systems using the paradigm of object orientation.},
author = {Fátima Pires and Claudia Bauzer Medeiros and Ardemiris Barros Silva},
date = {1993-06-01},
institution = {Institute of Computing, State University of Campinas},
keyword = {Techreport},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/93-13.pdf},
number = {93-13},
title = {Modelling Geographic Information Systems using an Object Oriented Framework},
type = {Technical Report},
year = {1993}
}
Geographic information systems demand the processing of complex data using specialized operations, not available in traditional database systems. Even though there exist commercial systems that provide some of these facilities, there is a lack of proper support, which should cover not only the implementation but also the design stage. This paper answers this latter need, discussing the steps for modelling databases for geographic information systems using the paradigm of object orientation.
|
Oliveira, J. L.;
Anido, R. O.
Browse and Query Operations in OODBMS (conference)
Proceedings of the 8th Brazilian Symposium on Databases,
Campina Grande, PB, Brazil,
1993.
(
BibTeX |
Tags:
Conference
)
@conference{Oliveira1993d,
address = {Campina Grande, PB, Brazil},
author = {J. L. Oliveira and R. O. Anido},
booktitle = {Proceedings of the 8th Brazilian Symposium on Databases},
date = {1993-05-01},
keyword = {Conference},
note = {In Portuguese},
pages = {35-49},
title = {Browse and Query Operations in OODBMS},
year = {1993}
}
|
Oliveira, Juliano Lopes de
A Graphical Tool for Browse and Query in Object-Oriented Databases (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
1993.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{deOliveira1993d,
abstract = {This dissertation analyses the problems involved in the design and implementation of graphical interfaces for object-oriented database management systems (DBMSs). As a result of this analysis, the dissertation presents directives for the development of database system interfaces. The practical application of these directives is illustrated through the specification and implementation of GOODIES -- a new interface system, which allows browsing and querying DBMSs that support the basic features of the OO model. The design and implementation of this new system is described as a case study of the use of the proposed directives. The system development process was purposely conducted independently of any specific DBMS. Thus, it can be used on top of several OO database systems.},
author = {Juliano Lopes de Oliveira},
date = {1993-03-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2015/09/OliveiraJulianoLopesde-1.pdf},
school = {Instituto de Computação - Unicamp},
title = {A Graphical Tool for Browse and Query in Object-Oriented Databases},
year = {1993}
}
This dissertation analyses the problems involved in the design and implementation of graphical interfaces for object-oriented database management systems (DBMSs). As a result of this analysis, the dissertation presents directives for the development of database system interfaces. The practical application of these directives is illustrated through the specification and implementation of GOODIES -- a new interface system, which allows browsing and querying DBMSs that support the basic features of the OO model. The design and implementation of this new system is described as a case study of the use of the proposed directives. The system development process was purposely conducted independently of any specific DBMS. Thus, it can be used on top of several OO database systems.
|
Lucena, Fabio Nogueira de
User Interface Construction: specifying and implementing dialogue control using statecharts (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
1993.
(
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{deLucena1993,
author = {Fabio Nogueira de Lucena},
date = {1993-03-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2015/09/LucenaFabioNogueirade.pdf},
school = {Instituto de Computação - Unicamp},
title = {User Interface Construction: specifying and implementing dialogue control using statecharts},
year = {1993}
}
|
Medeiros, C. B.;
Meidanis, J.;
Setubal, J.;
Vossen, G.;
Weske, M.
Supporting Scientific Databases using Object-oriented Systems (conference)
Proceedings of the Workshop on Information Technology - Cooperative Research with Industrial Partners between Germany and Brazil,
50-56,
1993.
(
BibTeX |
Tags:
Conference
)
@conference{Medeiros1993c,
author = {C. B. Medeiros and J. Meidanis and J. Setubal and G. Vossen and M. Weske},
booktitle = {Proceedings of the Workshop on Information Technology - Cooperative Research with Industrial Partners between Germany and Brazil},
date = {1993-01-01},
keyword = {Conference},
pages = {50-56},
title = {Supporting Scientific Databases using Object-oriented Systems},
year = {1993}
}
|
Medeiros, Claudia Bauzer;
Jomier, G.
Managing Alternatives and Data Evolution in GIS (conference)
Proceedings of the ACM/ISCA Workshop on Advances in Geographic Information Systems,
1993.
(
Abstract |
Links |
BibTeX |
Tags:
Conference
)
@conference{Medeiros1993,
abstract = {This paper presents a solution for managing spatio-temporal data in a GIS database. This solution allows efficient storage and handling of temporal data and alternatives using a version mechanism. It can be used for different types of GIS-based applications, such as urban planning, environmental control and utility management.},
author = {Claudia Bauzer Medeiros and G. Jomier},
booktitle = {Proceedings of the ACM/ISCA Workshop on Advances in Geographic Information Systems},
date = {1993-01-01},
keyword = {Conference},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/medeiros93managing.pdf},
pages = {36-39},
title = {Managing Alternatives and Data Evolution in GIS},
year = {1993}
}
This paper presents a solution for managing spatio-temporal data in a GIS database. This solution allows efficient storage and handling of temporal data and alternatives using a version mechanism. It can be used for different types of GIS-based applications, such as urban planning, environmental control and utility management.
|
1992 |
Oliveira, Juliano Lopes de;
Anido, Ricardo
Browsing and Querying Object-Oriented Databases (Technical Report)
Institute of Computing, State University of Campinas,
Technical Report,
92-12,
1992.
(
Abstract |
Links |
BibTeX |
Tags:
Techreport
)
@techreport{deOliveira1992,
abstract = {We present a new interface for Object-Oriented Database Management Systems (OODBMSs). The GOODIES system (an acronym for Graphical Object Oriented Database Interface with Extended Synchronism) combines and expands the functions of many existing interface systems, introducing some new concepts for improved browsing in an OODBMS. The implementation of GOODIES proposes a new approach to database interface development: instead of being strongly dependent on the underlying DBMS, GOODIES is based on the main features of the object-oriented data model. The system design is based on an internal model and on an external model. The internal model defines the relationships that bind the interface to the DBMS. The external model determines the possible interaction between the user and the interface system. This paper describes the concepts of the external model of the GOODIES system.},
author = {Juliano Lopes de Oliveira and Ricardo Anido},
date = {1992-12-01},
institution = {Institute of Computing, State University of Campinas},
keyword = {Techreport},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/92-12.pdf},
number = {92-12},
title = {Browsing and Querying Object-Oriented Databases},
type = {Technical Report},
year = {1992}
}
We present a new interface for Object-Oriented Database Management Systems (OODBMSs). The GOODIES system (an acronym for Graphical Object Oriented Database Interface with Extended Synchronism) combines and expands the functions of many existing interface systems, introducing some new concepts for improved browsing in an OODBMS. The implementation of GOODIES proposes a new approach to database interface development: instead of being strongly dependent on the underlying DBMS, GOODIES is based on the main features of the object-oriented data model. The system design is based on an internal model and on an external model. The internal model defines the relationships that bind the interface to the DBMS. The external model determines the possible interaction between the user and the interface system. This paper describes the concepts of the external model of the GOODIES system.
|
Medeiros, Claudia Bauzer;
Jomier, Geneviève;
Cellary, W.
Maintaining Integrity Constraints across Versions in a Database (Technical Report)
Institute of Computing, State University of Campinas,
Technical Report,
92-08,
1992.
(
Abstract |
Links |
BibTeX |
Tags:
Techreport
)
@techreport{Medeiros1992,
abstract = {This paper analyzes the problem of maintaining application-dependent integrity constraints in databases for design environments. Such environments are characterized by the need to support different types of interaction between integrity maintenance and version maintenance mechanisms. The paper describes these problems and proposes a framework in which they can be treated homogeneously. We thus bridge the gap existing between research on constraint maintenance and on version control, which has so far posed several problems to researchers in these two areas.},
author = {Claudia Bauzer Medeiros and Geneviève Jomier and W. Cellary},
date = {1992-11-01},
institution = {Institute of Computing, State University of Campinas},
keyword = {Techreport},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/92-08.pdf},
number = {92-08},
title = {Maintaining Integrity Constraints across Versions in a Database},
type = {Technical Report},
year = {1992}
}
This paper analyzes the problem of maintaining application-dependent integrity constraints in databases for design environments. Such environments are characterized by the need to support different types of interaction between integrity maintenance and version maintenance mechanisms. The paper describes these problems and proposes a framework in which they can be treated homogeneously. We thus bridge the gap existing between research on constraint maintenance and on version control, which has so far posed several problems to researchers in these two areas.
|
Medeiros, Claudia Bauzer;
Andrade, Marcia Jacobina
Implementing Integrity Control in Active Databases (Technical Report)
Institute of Computing, State University of Campinas,
Technical Report,
92-06,
1992.
(
Abstract |
Links |
BibTeX |
Tags:
Techreport
)
@techreport{Medeiros1992b,
abstract = {This paper presents an integrity maintenance system that has been developed for maintaining static constraints in databases, using the active database paradigm. This system has been added to the O2 object-oriented database system, and is fully functional. Constraints are specified by the user in a first-order logic language and transformed into production rules, which are stored in the database. The rules are then used to maintain the corresponding set of constraints for all applications that use the database, which no longer need to worry about integrity control. We extend previous work on constraint maintenance in two ways: our system can be used as a constraint maintenance layer on top of object-oriented, relational and nested relational databases; in the case of object-oriented systems, we provide constraint support not only for object composition, but also for inheritance and methods.},
author = {Claudia Bauzer Medeiros and Marcia Jacobina Andrade},
date = {1992-07-01},
institution = {Institute of Computing, State University of Campinas},
keyword = {Techreport},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2014/09/92-06.pdf},
number = {92-06},
title = {Implementing Integrity Control in Active Databases},
type = {Technical Report},
year = {1992}
}
This paper presents an integrity maintenance system that has been developed for maintaining static constraints in databases, using the active database paradigm. This system has been added to the O2 object-oriented database system, and is fully functional. Constraints are specified by the user in a first-order logic language and transformed into production rules, which are stored in the database. The rules are then used to maintain the corresponding set of constraints for all applications that use the database, which no longer need to worry about integrity control. We extend previous work on constraint maintenance in two ways: our system can be used as a constraint maintenance layer on top of object-oriented, relational and nested relational databases; in the case of object-oriented systems, we provide constraint support not only for object composition, but also for inheritance and methods.
|
Andrade, Marcia Jacobina
Integrity Constraints Maintenance in Object-Oriented Databases (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
1992.
(
Abstract |
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{Andrade1992,
abstract = {This thesis analyses the problem of static integrity constraints in object-oriented database systems, using production rules and the paradigm of active databases. This work shows how to automatically transform constraints into production rules, based on information from the constraints and the DBMS schema. The algorithm for rule generation was implemented and can be used not only for constraint maintenance in object-oriented database systems, but also for relational and nested database systems, being of general use. The research developed here includes the specification of a taxonomy for constraints in object-oriented systems that considers their dynamic dimension, and the definition and implementation of a language for constraint specification to facilitate their processing. This work extends proposals of other authors, implementing support for constraints not only on data, but also on methods.},
author = {Marcia Jacobina Andrade},
date = {1992-03-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2015/09/AndradeMarciaJacobinaBrito_M.pdf},
school = {Instituto de Computação - Unicamp},
title = {Integrity Constraints Maintenance in Object-Oriented Databases},
year = {1992}
}
This thesis analyses the problem of static integrity constraints in object-oriented database systems, using production rules and the paradigm of active databases. This work shows how to automatically transform constraints into production rules, based on information from the constraints and the DBMS schema. The algorithm for rule generation was implemented and can be used not only for constraint maintenance in object-oriented database systems, but also for relational and nested database systems, being of general use. The research developed here includes the specification of a taxonomy for constraints in object-oriented systems that considers their dynamic dimension, and the definition and implementation of a language for constraint specification to facilitate their processing. This work extends proposals of other authors, implementing support for constraints not only on data, but also on methods.
|
1991 |
Oliveira, Hilda Carvalho de
Sistema de operações em álgebra relacional não-normalizada (A system to support operations in non-normalized relational algebra) (mastersthesis)
Instituto de Computação - Unicamp,
mastersthesis,
1991.
(
Links |
BibTeX |
Tags:
Mastersthesis
)
@mastersthesis{deOliveira1991,
author = {Hilda Carvalho de Oliveira},
date = {1991-01-01},
keyword = {Mastersthesis},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2015/09/OliveiraHildaCarvalho.pdf},
school = {Instituto de Computação - Unicamp},
title = {Sistema de operações em álgebra relacional não-normalizada (A system to support operations in non-normalized relational algebra)},
year = {1991}
}
|
1990 |
1989 |
Schneider, Henrique Nou
Visões estendidas: uma proposta para extensão de bancos de dados relacionais (Extended views: a proposal for extending relational databases) (mastersthesis)
Instituto de Computação - Unicamp,
Campinas - SP,
mastersthesis,
1989.
(
Abstract |
Links |
BibTeX |
Tags:
database
)
@mastersthesis{Schneider,
abstract = {The thesis presents a proposal to extend the facilities of relational DBMSs through a view update mechanism. The extension is based on the definition of interfaces that support a new type of view: "extended views" (VEs), used as a mechanism to define and manipulate data types not supported by the original DBMS. Views are defined following the philosophy of Abstract Data Types, in which the view-generating function, the operations allowed on the view, and a set of mappings for each operation are encapsulated in a single module. Since each update may have more than one correct translation, the user can choose to specify the mapping when the view is defined or at the moment updates are processed. The proposal is validated through the implementation of a prototype that supports the definition and manipulation of several types of VEs.},
address = {Campinas - SP},
author = {Henrique Nou Schneider},
date = {1989-08-21},
keyword = {database},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2015/09/SchneiderHenriqueNou.pdf},
school = {Instituto de Computação - Unicamp},
title = {Visões estendidas: uma proposta para extensão de bancos de dados relacionais},
year = {1989}
}
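The "extended view" structure described in the abstract, a view-generating function plus its allowed operations and their translations, bundled in one module, can be sketched as follows. This is a hedged illustration of the ADT-style packaging, not the prototype's actual interface; all names are assumptions.

```python
# Sketch of an "extended view" as an ADT-like module: the generating
# function, the permitted operations, and the mapping of each operation
# back to base-table updates live together in one object.

class ExtendedView:
    def __init__(self, base, generate, mappings):
        self.base = base            # base relation (list of dicts)
        self.generate = generate    # view-generating function
        self.mappings = mappings    # view operation -> base-table translation

    def rows(self):
        return self.generate(self.base)

    def update(self, op, *args):
        if op not in self.mappings:
            raise ValueError(f"operation {op!r} not allowed on this view")
        self.mappings[op](self.base, *args)

employees = [{"name": "ana", "dept": "db"}, {"name": "bob", "dept": "os"}]
db_view = ExtendedView(
    base=employees,
    generate=lambda base: [r for r in base if r["dept"] == "db"],
    # One chosen translation of "insert into view": insert into the base
    # relation with the view's selection predicate made true.
    mappings={"insert": lambda base, name: base.append({"name": name, "dept": "db"})},
)
db_view.update("insert", "carla")
```

Because each view update can have several correct translations, the `mappings` table is exactly where that choice, made at view-definition time or deferred to update time, would be recorded.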
|
D'Oliveira, Liliane Leopoldina
Um sistema de pre-processamento de atualizações em bancos de dados relacionais (mastersthesis)
Instituto de Computação - Unicamp,
Campinas - SP,
mastersthesis,
1989.
(
Abstract |
BibTeX |
Tags:
Banco de dados, Processamento eletronico de dados, Sistema de recuperação da informação
)
@mastersthesis{OliveiraLiliane,
abstract = {The thesis presents a system for condensing and deferring updates in relational databases. The system supports the control of differential updates and can be used both for pre-processing updates to relations and for refreshing snapshots. The implemented system can also be adapted to maintain copies and fragments of relations at the nodes of a network. The implementation was done in PASCAL 3.4, interacting with the RDB relational DBMS on a VAX 11/785 system.},
address = {Campinas - SP},
author = {Liliane Leopoldina D'Oliveira},
date = {1989-06-13},
keyword = {Banco de dados, Processamento eletronico de dados, Sistema de recuperação da informação},
school = {Instituto de Computação - Unicamp},
title = {Um sistema de pre-processamento de atualizações em bancos de dados relacionais},
year = {1989}
}
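The condense-and-defer idea in this abstract, logging updates as a differential, collapsing them so that later writes to the same key win, and applying the net effect in one pass to refresh a snapshot, can be sketched briefly. This is an illustrative reconstruction of the general technique, not the thesis's PASCAL implementation; all names are assumptions.

```python
# Sketch of deferred/condensed updates: a differential log is condensed
# to its net effect per key, then applied in one pass to a snapshot.

def condense(log):
    """Keep only the net effect per key: later operations override earlier."""
    net = {}
    for op, key, value in log:  # op is "put" or "delete"
        net[key] = (op, value)
    return net

def refresh(snapshot, log):
    """Apply the condensed differential to bring the snapshot up to date."""
    for key, (op, value) in condense(log).items():
        if op == "put":
            snapshot[key] = value
        else:
            snapshot.pop(key, None)
    return snapshot

snap = {1: "a", 2: "b"}
log = [("put", 3, "c"), ("put", 1, "a2"), ("delete", 2, None), ("put", 1, "a3")]
refresh(snap, log)
```

Condensation is what makes deferral cheap: a key written many times between refreshes costs only one write when the snapshot is brought up to date.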
|
Toledo, Carlos Miguel Tobar
ANA-RE : um metodo para analise e especificação de requisitos (mastersthesis)
Instituto de Computação - Unicamp,
Campinas - SP,
mastersthesis,
1989.
(
Abstract |
Links |
BibTeX |
Tags:
Arquitetura de computador, Computação
)
@mastersthesis{Toledo,
abstract = {This dissertation introduces ANA-RE, a new method for building analytical models of problems, and presents the features of the automated environment SAES (Sistema de Apoio à Especificação de Sistemas) that supports it. To show the feasibility of this automation, the dissertation describes the development of a prototype database system supporting ANA-RE, implemented in SMALLTALK [*]. The ANA-RE method resulted from studies on the automation of SADT [+] and aims at the development of an automated environment that supports and complements the technique and instruments described in the SADT literature. The motivation is to support the construction of models for the analysis, definition, and communication of software-system requirements. Besides allowing its own automation, ANA-RE broadens the range of problems amenable to formal specification in comparison with SADT.},
address = {Campinas - SP},
author = {Carlos Miguel Tobar Toledo},
date = {1989-06-01},
keyword = {Arquitetura de computador, Computação},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2015/09/ToledoCarlosMiguelTobar_M.pdf},
school = {Instituto de Computação - Unicamp},
title = {ANA-RE : um metodo para analise e especificação de requisitos},
year = {1989}
}
|
Nogueira, Monica de Lima
Um sistema para manuseio de objetos entidade-relacionamento no modelo relacional (mastersthesis)
Instituto de Computação - Unicamp,
Campinas - SP,
mastersthesis,
1989.
(
Abstract |
Links |
BibTeX |
Tags:
Banco de dados orientado a objetos, Banco de dados relacionais
)
@mastersthesis{Nogueira,
abstract = {This thesis describes the features of an E-R interface to a relational database - the REVER system. The system allows the user to design the database using only the E-R specification, as well as to query and update it. The user's view is always that of an (extended) E-R diagram, both in the design and in the operational phases. The system specified here maps the user's requests on the E-R scheme and instances to operations on the corresponding relational database. The thesis also describes the details of developing a test prototype of the interface, thus validating the specification. This prototype allows the generation and manipulation of E-R schemes by means of menus.},
address = {Campinas - SP},
author = {Monica de Lima Nogueira},
date = {1989-03-29},
keyword = {Banco de dados orientado a objetos, Banco de dados relacionais},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2015/09/NogueiraMonicadeLima_M.pdf},
school = {Instituto de Computação - Unicamp},
title = {Um sistema para manuseio de objetos entidade-relacionamento no modelo relacional},
year = {1989}
}
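The mapping this abstract describes, from E-R schemes to relational operations, follows a standard pattern that can be sketched in a few lines: entities become tables keyed by their identifiers, and a relationship becomes a table whose key combines the keys of the participating entities. This is a generic illustration, not REVER's actual mapping; all names are assumptions.

```python
# Sketch of the classic E-R -> relational mapping: entities map to
# tables with their own key; a many-to-many relationship maps to a
# table keyed by the combined keys of its participants.

def entity_to_table(name, key, attrs):
    return {"table": name, "columns": [key] + attrs, "primary_key": [key]}

def relationship_to_table(name, left, right):
    return {"table": name,
            "columns": left["primary_key"] + right["primary_key"],
            "primary_key": left["primary_key"] + right["primary_key"]}

student = entity_to_table("student", "student_id", ["name"])
course = entity_to_table("course", "course_id", ["title"])
enrolls = relationship_to_table("enrolls", student, course)
```

An interface like the one described would then translate a user's diagram-level request ("connect this student to this course") into inserts on the generated relationship table.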
|
1988 |
Souza, Silvia Maria Fortuna Mendes de
Manipulação de bancos de dados atraves de formularios (mastersthesis)
Instituto de Computação - Unicamp,
Campinas - SP,
mastersthesis,
1988.
(
Abstract |
Links |
BibTeX |
Tags:
Automação, Formularios, Pratica de escritorio, Processamento de dados
)
@mastersthesis{SouzaSilvia,
abstract = {Forms have been a basic tool in the management of offices, ensuring efficiency in the storage and retrieval of information. One of the advantages of forms-based systems is that they are simple and well accepted by end users. They also facilitate the definition of "standard interfaces", the predefinition of parameters, and the integration of tools... Note: The complete abstract is available with the full electronic digital thesis or dissertation.},
address = {Campinas - SP},
author = {Silvia Maria Fortuna Mendes de Souza},
date = {1988-09-14},
keyword = {Automação, Formularios, Pratica de escritorio, Processamento de dados},
link = {http://www.lis.ic.unicamp.br/wp-content/uploads/2015/09/SouzaSilviaMariaFortunaMendes_M.pdf},
school = {Instituto de Computação - Unicamp},
title = {Manipulação de bancos de dados atraves de formularios},
year = {1988}
}
|