Science Publishing Group: American Journal of Software Engineering and Applications: Table of Contents
American Journal of Software Engineering and Applications (AJSEA) focuses on theories, methods, and applications in software. The scope of this journal ranges from mechanisms through the development of principles to the application of those principles to specific environments. It provides a high-profile, leading-edge forum for academic researchers, industrial professionals, engineers, consultants, managers, educators and policy makers working in the field to contribute and disseminate innovative new work on software.
http://www.sciencepublishinggroup.com/j/ajsea
Science Publishing Group
en-US
American Journal of Software Engineering and Applications
American Journal of Software Engineering and Applications
http://image.sciencepublishinggroup.com/journal/137.gif
http://www.sciencepublishinggroup.com/j/ajsea
Context-based Web Service Discovery Model
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20120101.11
Web services offer a vast number of interoperable programs, but only a basic, syntax-based method is used to discover them. The central problem is how to develop mechanisms that automatically locate the correct Web service to meet the user's requirements, the task known as web service discovery. Indeed, it is beyond human capability to analyze web service functionalities manually. This paper proposes an architectural model that assists the user by taking into account his or her constantly changing context. The model uses ontologies and the RDF language to describe resources and their metadata semantically and formally. It then selects services based on the semantics of the query, which consist of preferences and context. These preferences may be numerical, for example the price of a ticket when booking a flight, or a desired level of QoS.
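The abstract does not spell out the matching algorithm; as a rough, purely illustrative sketch (all service fields, preference names and values below are hypothetical, not taken from the paper), the following Python snippet filters candidate service descriptions against a user's context and numeric preferences such as a maximum ticket price or a minimum QoS score, then ranks the survivors.

```python
# Illustrative sketch only: the paper's RDF/ontology-based matching is not
# reproduced here; service fields, preferences and values are hypothetical.
from dataclasses import dataclass

@dataclass
class ServiceOffer:
    name: str
    price: float        # e.g. ticket price
    qos: float          # quality-of-service score in [0, 1]
    location: str       # provider location (part of the delivery context)

def discover(offers, user_location, max_price, min_qos):
    """Keep offers that satisfy the user's context and numeric preferences,
    then rank the survivors by QoS (higher first) and price (lower first)."""
    candidates = [o for o in offers
                  if o.location == user_location
                  and o.price <= max_price
                  and o.qos >= min_qos]
    return sorted(candidates, key=lambda o: (-o.qos, o.price))

offers = [ServiceOffer("FlyCheap", 120.0, 0.7, "Paris"),
          ServiceOffer("AirFast", 150.0, 0.9, "Paris"),
          ServiceOffer("SkyGo",    90.0, 0.6, "London")]
print(discover(offers, user_location="Paris", max_price=160, min_qos=0.65))
```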
Context-based Web Service Discovery Model
doi:10.11648/j.ajsea.20120101.11
American Journal of Software Engineering and Applications
2014-01-01
© Science Publishing Group
Hamid Mcheick
Amel Hannech
Mehdi Adda
Context-based Web Service Discovery Model
1
1
9
9
2014-01-01
2014-01-01
10.11648/j.ajsea.20120101.11
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20120101.11
© Science Publishing Group
A Framework for Evaluating Model-driven Architecture
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20120101.12
In the last few years, Model-Driven Development (MDD) has become an interesting alternative for designing self-adaptive software systems. In general, the ultimate goal of this technology is to reduce development costs and effort while improving the modularity, flexibility, adaptability, and reliability of software systems. An analysis of model-driven methodologies shows that they all include the principle of separation of concerns as a key factor for obtaining high-quality, self-adaptable software systems. Each methodology identifies different concerns and deals with them separately in order to specify the design of self-adaptive applications and, at the same time, provide the software with adaptability and context-awareness. This research studies development methodologies that employ the principles of model-driven architecture to build self-adaptive software systems. To this aim, this article proposes an evaluation framework for analyzing and evaluating the features of those development approaches and their ability to support software with self-adaptability and dependability in a highly dynamic contextual environment. Such an evaluation framework can help software developers select a development methodology that suits their software requirements and reduces the effort of building self-adaptive software systems. This study highlights the major drawbacks of the model-driven approaches proposed in related work and emphasizes considering the volatile aspects of self-adaptive software in the analysis, design and implementation phases of the development methodologies. In addition, we argue that development methodologies should leave the selection of modeling languages and modeling tools to the software developers.
A Framework for Evaluating Model-driven Architecture
doi:10.11648/j.ajsea.20120101.12
American Journal of Software Engineering and Applications
2014-01-01
© Science Publishing Group
Basel Magableh
Butheyna Rawashdeh
Stephen Barrett
A Framework for Evaluating Model-driven Architecture
1
1
22
22
2014-01-01
2014-01-01
10.11648/j.ajsea.20120101.12
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20120101.12
© Science Publishing Group
An Efficient Fingerprint Image Thinning Algorithm
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20130201.11
Most fingerprint recognition applications rely heavily on efficient and fast image enhancement algorithms. Image thinning is a very important stage of image enhancement. A good thinning algorithm preserves the structure of the original fingerprint image, reduces the amount of data to be processed, and helps improve feature extraction accuracy and efficiency. In this paper we describe and compare some of the most widely used fingerprint thinning algorithms. Results show that faster algorithms have difficulty preserving connectivity. Zhang and Suen's algorithm requires the least processing time, while Guo and Hall's algorithm produces the best skeleton quality. A modified Zhang and Suen algorithm is proposed that is efficient and fast, and better preserves structure and connectivity.
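For reference, the classic Zhang-Suen procedure mentioned in the abstract can be stated compactly. The sketch below is a straightforward NumPy implementation of the standard two-subiteration algorithm, not the modified variant proposed in the paper.

```python
import numpy as np

def zhang_suen_thinning(image):
    """Thin a binary image (1 = foreground, 0 = background) with the classic
    Zhang-Suen two-subiteration algorithm (the paper's modified variant is
    not reproduced here)."""
    img = (np.asarray(image) > 0).astype(np.uint8).copy()
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_delete = []
            rows, cols = img.shape
            for r in range(1, rows - 1):
                for c in range(1, cols - 1):
                    if img[r, c] != 1:
                        continue
                    # Neighbours P2..P9, clockwise starting from the north pixel.
                    p = [img[r-1, c], img[r-1, c+1], img[r, c+1], img[r+1, c+1],
                         img[r+1, c], img[r+1, c-1], img[r, c-1], img[r-1, c-1]]
                    b = sum(p)                                   # non-zero neighbours
                    a = sum(p[i] == 0 and p[(i + 1) % 8] == 1 for i in range(8))
                    if step == 0:
                        cond = p[0] * p[2] * p[4] == 0 and p[2] * p[4] * p[6] == 0
                    else:
                        cond = p[0] * p[2] * p[6] == 0 and p[0] * p[4] * p[6] == 0
                    if 2 <= b <= 6 and a == 1 and cond:
                        to_delete.append((r, c))
            for r, c in to_delete:
                img[r, c] = 0
            changed = changed or bool(to_delete)
    return img
```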
An Efficient Fingerprint Image Thinning Algorithm
doi:10.11648/j.ajsea.20130201.11
American Journal of Software Engineering and Applications
2014-01-01
© Science Publishing Group
Davit Kocharyan
An Efficient Fingerprint Image Thinning Algorithm
2
1
6
6
2014-01-01
2014-01-01
10.11648/j.ajsea.20130201.11
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20130201.11
© Science Publishing Group
An approach to Virtual Laboratory Design and Testing
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20130201.14
Laboratory experiments and research are important parts of natural science education. They supplement the theoretical learning material and contribute to a deeper understanding of a subject. Carrying out such activities requires appropriate laboratory equipment and reagents, which are often either inaccessible or incomplete. Virtual labs solve this problem and allow the same experiment to be performed repeatedly without any restriction. An interactive laboratory environment engages pupils in active learning, enhances their understanding of processes and their practical skills, and promotes a successful e-learning strategy. A virtual lab includes many embedded experiments that the student must perform via certain scenarios. In this paper an approach to designing laboratory experiments for a virtual lab environment and testing the implementation of their scenarios is suggested. The experiment design patterns are based on a finite-state automaton model, and an object-oriented approach to virtual experiment implementation is provided. For testing the patterns, a class-testing methodology is used. The suggested approaches are realized in the presented virtual laboratory environments for Chemistry and Biology, which have been developed to support laboratory study in Armenian schools, colleges, and universities. These methods will be used in long-term research on creating virtual laboratories in different disciplines (organic and inorganic chemistry, physics, and biology) as well as in developing other virtual laboratories.
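Since the abstract states that the experiment design patterns are based on a finite-state automaton model, a minimal sketch of such a scenario automaton is shown below; the states, actions and scenario are invented for illustration and are not taken from the described labs.

```python
# Minimal finite-state automaton sketch for a virtual-lab experiment scenario.
# States, actions and the scenario itself are hypothetical.
class ExperimentFSM:
    def __init__(self, transitions, start, accepting):
        self.transitions = transitions   # {(state, action): next_state}
        self.state = start
        self.accepting = accepting

    def perform(self, action):
        key = (self.state, action)
        if key not in self.transitions:
            raise ValueError(f"'{action}' is not allowed in state '{self.state}'")
        self.state = self.transitions[key]

    def completed(self):
        return self.state in self.accepting

# A toy scenario: prepare equipment -> add reagent -> observe result.
fsm = ExperimentFSM(
    transitions={("start", "prepare_equipment"): "ready",
                 ("ready", "add_reagent"): "reacting",
                 ("reacting", "observe_result"): "done"},
    start="start",
    accepting={"done"})

for step in ["prepare_equipment", "add_reagent", "observe_result"]:
    fsm.perform(step)
print(fsm.completed())   # True if the scenario was followed correctly
```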
An approach to Virtual Laboratory Design and Testing
doi:10.11648/j.ajsea.20130201.14
American Journal of Software Engineering and Applications
2014-01-01
© Science Publishing Group
A. Hovakimyan
S. Sargsyan
N. Ispiryan
L. Khachoyan
K. Darbinyan
An approach to Virtual Laboratory Design and Testing
2
1
23
23
2014-01-01
2014-01-01
10.11648/j.ajsea.20130201.14
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20130201.14
© Science Publishing Group
R Language in Data Mining Techniques and Statistics
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20130201.12
Data mining is a set of techniques and methods for extracting knowledge from large amounts of data (through automatic or semi-automatic methods) and for the further scientific, industrial or operational use of that knowledge. Data mining is closely related to statistics as an applied mathematical discipline concerned with the analysis of data, which can be defined as the extraction of useful information from data. The main difference between the two disciplines is that data mining is a newer discipline concerned with significant or large data sets. R is an object-oriented programming language. This means that everything that is done with R can be saved as an object, and every object has a class that describes what the object contains and what each function does. Using R as a programming language and statistical software is much more than a supplement to Stata, SAS, and SPSS. Although it is more difficult to learn, the biggest advantages of R are that it is free of charge and that it offers a wealth of specialized application packages and libraries for a huge number of statistical, mathematical and other methods. R is a simple but very powerful data mining and statistical data processing tool and, once "discovered", it provides users with an entirely new, rich and powerful tool applicable in almost every field of research.
R Language in Data Mining Techniques and Statistics
doi:10.11648/j.ajsea.20130201.12
American Journal of Software Engineering and Applications
2014-01-01
© Science Publishing Group
Sonja Pravilovic
R Language in Data Mining Techniques and Statistics
2
1
12
12
2014-01-01
2014-01-01
10.11648/j.ajsea.20130201.12
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20130201.12
© Science Publishing Group
Generic Object Recognition Using Graph Embedding into A Vector Space
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20130201.13
This paper describes a method for generic object recognition using graph structural expression. In recent years, generic object recognition by computer has found extensive use in a variety of fields, including robotic vision and image retrieval. Conventional methods use a bag-of-features (BoF) approach, which expresses the image as an appearance frequency histogram of visual words obtained by quantizing SIFT (Scale-Invariant Feature Transform) features. However, there is a problem associated with this approach: the location information and the relationships between keypoints (both of which are important structural information) are lost. To deal with this problem, in the proposed method a graph is constructed by connecting SIFT keypoints with lines. As a result, the keypoints maintain their relationships, and a structural representation with location information is achieved. Since a graph representation is not suitable for statistical processing, the graph is embedded into a vector space according to the graph edit distance. Experimental results on two multi-class image datasets showed that the proposed method improved the recognition rate.
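One common way to embed graphs into a vector space via graph edit distance is dissimilarity-space embedding: a graph is represented by its edit distances to a set of prototype graphs. The sketch below illustrates that idea with toy graphs standing in for keypoint graphs; the paper's graph construction over SIFT keypoints, its prototype selection and its classifier are not reproduced.

```python
# Sketch of dissimilarity-space embedding with graph edit distance.
# Toy graphs stand in for graphs built over SIFT keypoints.
import networkx as nx

def embed(graph, prototypes):
    """Map a graph to a vector of edit distances to each prototype graph."""
    return [nx.graph_edit_distance(graph, p) for p in prototypes]

# Toy "keypoint" graphs (nodes = keypoints, edges = neighbouring keypoints).
g1 = nx.path_graph(4)
g2 = nx.cycle_graph(4)
prototypes = [nx.path_graph(3), nx.star_graph(3)]

print(embed(g1, prototypes))   # distance vectors can feed a standard classifier
print(embed(g2, prototypes))
```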
Generic Object Recognition Using Graph Embedding into A Vector Space
doi:10.11648/j.ajsea.20130201.13
American Journal of Software Engineering and Applications
2014-01-01
© Science Publishing Group
Takahiro Hori
Tetsuya Takiguchi
Yasuo Ariki
Generic Object Recognition Using Graph Embedding into A Vector Space
2
1
18
18
2014-01-01
2014-01-01
10.11648/j.ajsea.20130201.13
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20130201.13
© Science Publishing Group
A Metric Based Approach for Analysis of Software Development Processes in Open Source Environment
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20130202.16
Open source software (OSS) is software whose source code is available to anyone under a license that gives them the freedom to run the program and to study, modify and redistribute copies of the original or modified program. Its objective is to encourage involvement in the form of improvement, modification and distribution of the licensed work. OSS has proved itself highly suited both as a software product and as a development methodology. The main challenge in open source software development (OSSD) is to collect and extract data. This paper presents various aspects of the open source software community and the roles of different types of users as well as developers. A metric-based approach for the analysis of software development processes in an open source environment is suggested and validated through a case study of the development processes undertaken by developers for about fifty different open-source software systems.
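The specific metrics are defined in the paper; as a generic illustration of the kind of process data a metric-based analysis can build on, the sketch below derives a few simple indicators from hypothetical commit records.

```python
# Hypothetical commit records; a minimal sketch of deriving simple process
# metrics (commit volume, contributor count, average change size). These are
# generic indicators, not the metrics proposed in the paper.
from collections import Counter

commits = [
    {"author": "alice", "files_changed": 3},
    {"author": "bob",   "files_changed": 1},
    {"author": "alice", "files_changed": 5},
]

def process_metrics(commits):
    authors = Counter(c["author"] for c in commits)
    return {
        "total_commits": len(commits),
        "contributors": len(authors),
        "commits_per_contributor": len(commits) / len(authors),
        "avg_files_per_commit": sum(c["files_changed"] for c in commits) / len(commits),
    }

print(process_metrics(commits))
```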
A Metric Based Approach for Analysis of Software Development Processes in Open Source Environment
doi:10.11648/j.ajsea.20130202.16
American Journal of Software Engineering and Applications
2014-01-01
© Science Publishing Group
Parminder Kaur
Hardeep Singh
A Metric Based Approach for Analysis of Software Development Processes in Open Source Environment
2
2
79
79
2014-01-01
2014-01-01
10.11648/j.ajsea.20130202.16
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20130202.16
© Science Publishing Group
The Cognitive Programming Paradigm the Next Programming Structure
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20130202.15
The development of computer programming started with the development of switching logic, because computer hardware is made up of millions of digital switches. These switches are activated and deactivated through codified instructions (programs) which trigger them to function. Computer programming languages have gone through a revolution, from machine code, through assembly mnemonics, to high-level programming languages such as FORTRAN, ALGOL, COBOL, LISP, BASIC, ADA and C/C++. These programming languages are not the exact codes that microprocessors understand and work with; compiler and interpreter programs convert the high-level languages that people easily understand into machine code that the microprocessor can execute to do the work it has been instructed to do. The variety of programming languages stems from the difficulty of using one programming language to solve different problems on the computer. Hence, for mathematical and trigonometric problems FORTRAN is best, for business problems COBOL is the right language, whilst for computer games and design BASIC is the solution. The trend of using individual programming languages to solve specific problems on single-processor computers has changed drastically with the move from single-core processors to present-day dual- and multi-core processors. The main target of engineers and scientists is to reach a stage where the computer can think like the human brain. The human brain contains many cognitive (thinking) modules that work in parallel to produce a unique result. With the presence of multi-core processors, why should computers continue to draw summaries from stored databases and leave us to sit for hours analysing those results to find solutions to problems? The subject of the 'Cognitive Programming Paradigm' analyses the various programming structures and concludes that they perform similar tasks of processing stored databases and producing summarized information. This summarized information is not final; business managers and executives have to sit for hours deliberating on which strategic decisions to take. Moreover, present-day computers cannot solve problems holistically, as such problems normally appear to human beings. Hence, these programming structures need to be grouped together to solve human problems holistically, just as the human brain processes complex problems holistically. With multi-core processors it is possible to structure programming so that these programming structures run in parallel to solve a specific problem completely, i.e. to analyse which programming structure is suitable for a particular problem, or to store a first solution and compare it with new solutions to a problem in order to arrive at a strategic decision, rather than what is done at present. This approach could lift the burden on managers and executives of deliberating further on the results of a processed business problem.
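The abstract argues that on multi-core processors several programming structures could be run in parallel on the same problem and their solutions compared. A minimal sketch of that idea follows; the two strategies are deliberately trivial stand-ins, not the cognitive modules discussed in the paper, and the "strategic decision" here is simply picking the largest result.

```python
# Minimal sketch: run several candidate solution strategies for the same
# problem in parallel on multiple cores and keep the best result.
from concurrent.futures import ProcessPoolExecutor

def strategy_sum_of_squares(data):
    return ("sum_of_squares", sum(x * x for x in data))

def strategy_max_times_len(data):
    return ("max_times_len", max(data) * len(data))

def solve_in_parallel(data):
    strategies = [strategy_sum_of_squares, strategy_max_times_len]
    with ProcessPoolExecutor() as pool:
        futures = [pool.submit(s, data) for s in strategies]
        results = [f.result() for f in futures]
    # "Strategic decision": here simply the strategy yielding the largest value.
    return max(results, key=lambda r: r[1])

if __name__ == "__main__":
    print(solve_in_parallel([1, 2, 3, 4, 5]))
```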
The Cognitive Programming Paradigm the Next Programming Structure
doi:10.11648/j.ajsea.20130202.15
American Journal of Software Engineering and Applications
2014-01-01
© Science Publishing Group
Benjamin Odei Bempong
The Cognitive Programming Paradigm the Next Programming Structure
2
2
67
67
2014-01-01
2014-01-01
10.11648/j.ajsea.20130202.15
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20130202.15
© Science Publishing Group
An Approach to Modeling Domain-Wide Information, based on Limited Points’ Data – Part II
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20130202.13
Predicting values at data points in a specified region when only a few values are known is a perennial problem, and many approaches have been developed in response. Interpolation schemes provide some success and are the most widely used among these approaches. However, none of those schemes incorporates historical aspects in its formulae. This study presents an approach to interpolation which utilizes the historical relationships existing between the data points in a region of interest. By combining the historical relationships with the interpolation equations, an algorithm is presented for making predictions over an entire domain area where data is known only for some random parts of that area. A performance analysis indicates that, even when provided with less than ten percent of the domain's data, the algorithm outperforms the other popular interpolation algorithms even when those are given more than fifty percent of the domain's data.
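The paper's history-aware scheme is not specified in the abstract; as a point of reference, the sketch below implements plain inverse-distance weighting (IDW), one of the popular interpolation schemes such an algorithm is typically compared against. The sample points and target location are hypothetical.

```python
# Inverse-distance weighting (IDW), a standard baseline interpolation scheme;
# the paper's history-aware algorithm itself is not reproduced here.
import math

def idw(known_points, target, power=2):
    """known_points: list of ((x, y), value); target: (x, y) to estimate."""
    num, den = 0.0, 0.0
    for (x, y), value in known_points:
        d = math.hypot(target[0] - x, target[1] - y)
        if d == 0:
            return value                 # exact hit on a known point
        w = 1.0 / d ** power
        num += w * value
        den += w
    return num / den

known = [((0, 0), 10.0), ((0, 4), 14.0), ((3, 0), 7.0)]
print(idw(known, (1, 1)))                # estimate at an unsampled location
```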
An Approach to Modeling Domain-Wide Information, based on Limited Points’ Data – Part II
doi:10.11648/j.ajsea.20130202.13
American Journal of Software Engineering and Applications
2014-01-01
© Science Publishing Group
John Charlery
Chris D. Smith
An Approach to Modeling Domain-Wide Information, based on Limited Points’ Data – Part II
2
2
48
48
2014-01-01
2014-01-01
10.11648/j.ajsea.20130202.13
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20130202.13
© Science Publishing Group
Analogy-Based Software Quality Prediction with Project Feature Weights
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20130202.14
This paper presents analogy-based software quality estimation with project feature weights. The objective of this research is to predict the quality of a project accurately and to use the results in future predictions. The focus includes identifying the parameters on which the quality of software depends; the estimation of the rate of improvement of software quality chiefly depends on the development time. Assigning weights to these parameters to improve the results is also of interest. In this paper two different similarity measures, Euclidean and Manhattan, were used for retrieving matching cases from the knowledgebase in order to increase estimation accuracy and reliability. Expert judgment, weights and rating levels were used to assign weights and quality rating levels. The results show that assigning weights to software metrics increases the prediction performance considerably. In order to obtain the results, we have used indigenous tools.
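The core retrieval step can be illustrated directly: compute a weighted Euclidean or Manhattan distance between a target project's feature vector and the cases in a knowledgebase, and reuse the rating of the nearest case. The features, weights and cases below are hypothetical, not the paper's dataset.

```python
# Sketch of analogy-based retrieval with weighted Euclidean/Manhattan distance.
# Features, weights and knowledgebase entries are hypothetical.
def weighted_distance(a, b, weights, metric="euclidean"):
    if metric == "euclidean":
        return sum(w * (x - y) ** 2 for x, y, w in zip(a, b, weights)) ** 0.5
    return sum(w * abs(x - y) for x, y, w in zip(a, b, weights))   # manhattan

def most_similar(target, knowledgebase, weights, metric="euclidean"):
    """knowledgebase: list of (feature_vector, quality_rating)."""
    return min(knowledgebase,
               key=lambda case: weighted_distance(target, case[0], weights, metric))

# Hypothetical features: (team size, KLOC, development time in months).
knowledgebase = [((5, 20, 6), "good"), ((12, 80, 14), "average"), ((3, 10, 3), "good")]
weights = (0.2, 0.5, 0.3)
print(most_similar((6, 25, 7), knowledgebase, weights, metric="manhattan"))
```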
Analogy-Based Software Quality Prediction with Project Feature Weights
doi:10.11648/j.ajsea.20130202.14
American Journal of Software Engineering and Applications
2014-01-01
© Science Publishing Group
Ekbal Rashid
Srikanta Patnaik
Vandana Bhattacharya
Analogy-Based Software Quality Prediction with Project Feature Weights
2
2
53
53
2014-01-01
2014-01-01
10.11648/j.ajsea.20130202.14
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20130202.14
© Science Publishing Group
Using the Semantic Web Services to Build a Virtual Medical Analysis Laboratory
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20130202.17
In the medical analysis field, patients often must visit a multitude of laboratory-related web sites in order to check availability, booking, prices and result turnaround times, and to find the nearest laboratory. This variety of reasons for visiting the web sites limits their usability. To overcome these limitations, this paper proposes a Virtual Medical Analysis Laboratory (VMAL) prototype system based on applying Semantic Web Services (SWSs) to schedule outpatient tests and discover the most suitable laboratory. Furthermore, the proposed prototype is based on the Web Service Modeling Ontology (WSMO).
Using the Semantic Web Services to Build a Virtual Medical Analysis Laboratory
doi:10.11648/j.ajsea.20130202.17
American Journal of Software Engineering and Applications
2014-01-01
© Science Publishing Group
Houda El Bouhissi
Mimoun Malki
Djamila Berramdane
Rafa E. Al-Qutaish
Using the Semantic Web Services to Build a Virtual Medical Analysis Laboratory
2
2
85
85
2014-01-01
2014-01-01
10.11648/j.ajsea.20130202.17
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20130202.17
© Science Publishing Group
An Approach to Modeling Domain-Wide Information, based on Limited Points’ Data – Part I
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20130202.12
Predicting values at data points in a specified region when only a few values are known is a perennial problem, and many approaches have been developed in response. Interpolation schemes provide some success and are the most widely used among these approaches. However, none of those schemes incorporates historical aspects in its formulae. This study presents an approach to interpolation which utilizes the historical relationships existing between the data points in a region of interest. By combining the historical relationships with the interpolation equations, an algorithm is presented for making predictions over an entire domain area where data is known only for some random parts of that area. A performance analysis indicates that, even when provided with less than ten percent of the domain's data, the algorithm outperforms the other popular interpolation algorithms even when those are given more than fifty percent of the domain's data.
An Approach to Modeling Domain-Wide Information, based on Limited Points’ Data – Part I
doi:10.11648/j.ajsea.20130202.12
American Journal of Software Engineering and Applications
2014-01-01
© Science Publishing Group
John Charlery
Chris D. Smith
An Approach to Modeling Domain-Wide Information, based on Limited Points’ Data – Part I
2
2
39
39
2014-01-01
2014-01-01
10.11648/j.ajsea.20130202.12
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20130202.12
© Science Publishing Group
Intelligent Assessment and Prediction of Software Characteristics at the Design Stage
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20130202.11
This article presents an intelligent method and system for evaluating design results and predicting software characteristics based on the processing of sets of software metrics.
Intelligent Assessment and Prediction of Software Characteristics at the Design Stage
doi:10.11648/j.ajsea.20130202.11
American Journal of Software Engineering and Applications
2014-01-01
© Science Publishing Group
Oksana Pomorova
Tetyana Hovorushchenko
Intelligent Assessment and Prediction of Software Characteristics at the Design Stage
2
2
31
31
2014-01-01
2014-01-01
10.11648/j.ajsea.20130202.11
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20130202.11
© Science Publishing Group
A Discussion of Software Reliability Growth Models with Time-Varying Learning Effects
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20130203.12
Over the last few decades, software reliability growth models (SRGMs) have been developed to predict software reliability in the testing/debugging phase. Most of these models are based on the Non-Homogeneous Poisson Process (NHPP), and an S-shaped or exponential-shaped type of testing behavior is usually assumed. Chiu et al. (2008) provided an SRGM that considers learning effects and is able to describe the S-shaped and exponential-shaped behaviors simultaneously. This paper considers both linear and exponential learning effects in an SRGM to enhance the model of Chiu et al. (2008), assumes that the learning effects depend on the testing time, and discusses when and what learning effects occur in the software development process. This research also verifies the effectiveness of the proposed models with R square (Rsq) and compares the results with those of other models using four real datasets. The proposed models consider constant, linear, and exponential learning effects simultaneously. The results reveal that the proposed models fit the data better than other models and that learning effects do occur in the software testing process. The results help software testing/debugging managers to master the schedule of their projects, the performance of the programmers, and the reliability of the software system.
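The paper's time-varying learning-effect models are defined in the paper itself; as a worked illustration of the general SRGM workflow it describes (fit an NHPP mean value function to failure data, then judge the fit with Rsq), the sketch below fits the classic Goel-Okumoto exponential model m(t) = a(1 - e^(-bt)) to synthetic cumulative failure counts.

```python
# Illustration of the general SRGM workflow: fit an NHPP mean value function
# and judge the fit with R-squared. The classic Goel-Okumoto model and
# synthetic data are used here, not the paper's learning-effect models or data.
import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    return a * (1.0 - np.exp(-b * t))          # expected cumulative failures m(t)

# Synthetic cumulative failure counts over ten testing weeks.
t = np.arange(1, 11)
m_obs = np.array([8, 15, 21, 26, 30, 33, 35, 37, 38, 39], dtype=float)

(a_hat, b_hat), _ = curve_fit(goel_okumoto, t, m_obs, p0=(40.0, 0.3))
m_fit = goel_okumoto(t, a_hat, b_hat)

ss_res = np.sum((m_obs - m_fit) ** 2)
ss_tot = np.sum((m_obs - np.mean(m_obs)) ** 2)
rsq = 1.0 - ss_res / ss_tot
print(f"a={a_hat:.1f}, b={b_hat:.2f}, Rsq={rsq:.3f}")
```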
A Discussion of Software Reliability Growth Models with Time-Varying Learning Effects
doi:10.11648/j.ajsea.20130203.12
American Journal of Software Engineering and Applications
2014-01-01
© Science Publishing Group
Kuei-Chen Chiu
A Discussion of Software Reliability Growth Models with Time-Varying Learning Effects
2
3
104
104
2014-01-01
2014-01-01
10.11648/j.ajsea.20130203.12
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20130203.12
© Science Publishing Group
Supporting Engineering Design Modeling by Domain Specific Modeling Language
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20130203.11
The domain-specific modeling methodology employed in this solution provides abstractions in the problem domain that express designs in terms of concepts in the application domain. Presented in this paper, therefore, is a metamodelling tool, an integrated platform which offers layered collections of reusable software primitives whose semantics are specific to engineering design mechanisms. It is intended to eliminate the complexities associated with computing technologies such as CAD systems, where the focus is solely on engineering design expertise embedded in the software system's logic. The tool, which is built on a DSL processor engine that compiles the DSL Builder files at its core, enables non-design experts to evolve designs specific to their domains of operation and reflecting their viewpoints. At the development interface, templates are created for every transformation added to the model that can be applied in the physical design of objects in the engineering industry. In this way it removes the hassles and complexities of expertise-centric design platforms and produces artifacts that help engineers manage very complex design concepts.
Supporting Engineering Design Modeling by Domain Specific Modeling Language
doi:10.11648/j.ajsea.20130203.11
American Journal of Software Engineering and Applications
2014-01-01
© Science Publishing Group
Japheth Bunakiye Richard
Ogheneovo Edward Erhieyovwe
Supporting Engineering Design Modeling by Domain Specific Modeling Language
2
3
91
91
2014-01-01
2014-01-01
10.11648/j.ajsea.20130203.11
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20130203.11
© Science Publishing Group
A Cohesion Measure for C in the Context of an AOP Paradigm
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20130204.11
Cohesion measures the relative functional strength of a module and affects an internal attribute of a function such as modularity. Modularity has become an accepted approach in every engineering discipline, and the concept of modular design has considerably reduced the complexity of software design. Cohesion represents the strength of the bond between the internal elements of a module. To achieve effective modularity, design concepts such as functional independence are considered very important. Aspect-oriented software development (AOSD) has emerged over the last decade as a paradigm for the separation of concerns that aims to increase modularity; the presence of aspects therefore affects the cohesiveness of a module. Like any new technology, aspect-oriented programming (AOP) was introduced to solve problems related to object orientation (OO), and in particular Java. It was later noticed that AOP's ideas were not necessarily tied to OO (and Java) but applied also to less modular paradigms such as imperative programming. Moreover, several metrics have been proposed to assess the quality attributes of aspect-oriented systems in an object-oriented context. However, not much work has been done to assess the impact of AOP on the imperative style of programming (also called the procedural paradigm, as in the C language). Therefore, metrics are required to measure quality attributes for AOP used with imperative programming. Cohesion is considered an important software quality attribute. In this context, this paper presents an approach for measuring cohesion based on dependence analysis using control flow graphs (CFGs).
A Cohesion Measure for C in the Context of an AOP Paradigm
doi:10.11648/j.ajsea.20130204.11
American Journal of Software Engineering and Applications
2014-01-01
© Science Publishing Group
Zeba Khanam
S. A. M Rizvi
A Cohesion Measure for C in the Context of an AOP Paradigm
2
4
110
110
2014-01-01
2014-01-01
10.11648/j.ajsea.20130204.11
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20130204.11
© Science Publishing Group
A Systematic Review of Fault Tolerance in Mobile Agents
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20130205.11
Mobile agents have attracted substantial attention in recent years, especially in fault tolerance research, and several approaches have emerged. Fault tolerance design aims to prevent partial or complete loss of the agent in the face of failures. Despite these developments, reliability issues still remain a critical challenge. Moreover, there is no comprehensive account bringing together summaries of the existing research efforts in order to focus attention where it is needed most. Therefore, our objective in this systematic literature review (SLR) is to explore and analyze the existing fault tolerance implementations in order to establish the state of the art and the challenges in mobile agent fault tolerance approaches. We used studies from a number of relevant article sources, and our search identified twenty-six articles. Our analysis indicates that the existing approaches are not generic and each focuses on a specific aspect of the problem, usually on one or two specific fault models, which impacts agent reliability. The implication of the study is to give future researchers in this area a clear direction towards more reliable and transparent fault tolerance in mobile agents.
A Systematic Review of Fault Tolerance in Mobile Agents
doi:10.11648/j.ajsea.20130205.11
American Journal of Software Engineering and Applications
2014-01-01
© Science Publishing Group
Bassey Echeng Isong
Eyaye Bekele
A Systematic Review of Fault Tolerance in Mobile Agents
2
5
124
124
2014-01-01
2014-01-01
10.11648/j.ajsea.20130205.11
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20130205.11
© Science Publishing Group
Simulation of Traffic Lights for Green Wave and Dynamic Change of Signal
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20130206.11
In this study a traffic light system has been considered and simulated in MATLAB to create a hierarchical and logical model. The model is designed over five junctions to relieve traffic jams in big cities by simulating a continuous flow through the traffic lights. The simulation includes a green wave flow and the dynamic change of traffic lights in response to changes in traffic volume. The simulation secures continuous traffic flow by updating the light timing to provide a green wave.
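The green-wave idea amounts to offsetting each junction's green phase by the travel time from the previous junction, so that a platoon driving at the design speed meets green lights in sequence. A small Python sketch of that computation follows; the junction spacings and design speed are hypothetical, and the paper's MATLAB model is not reproduced.

```python
# Green-wave offsets: each junction's green phase starts one travel time after
# the previous junction's. Distances and speed are hypothetical.
def green_wave_offsets(distances_m, speed_kmh):
    """distances_m[i] = distance from junction i to junction i+1."""
    speed_ms = speed_kmh / 3.6
    offsets = [0.0]                        # first junction starts the wave
    for d in distances_m:
        offsets.append(offsets[-1] + d / speed_ms)
    return offsets                         # seconds after the wave starts

# Five junctions spaced 300-500 m apart, design speed 50 km/h.
print(green_wave_offsets([300, 400, 350, 500], speed_kmh=50))
```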
Simulation of Traffic Lights for Green Wave and Dynamic Change of Signal
doi:10.11648/j.ajsea.20130206.11
American Journal of Software Engineering and Applications
2014-01-01
© Science Publishing Group
Güney GORGUN
Ibrahim Halil GUZELBEY
Simulation of Traffic Lights for Green Wave and Dynamic Change of Signal
2
6
132
132
2014-01-01
2014-01-01
10.11648/j.ajsea.20130206.11
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20130206.11
© Science Publishing Group
Optimal Performance Model Investigation in Component-Based Software Engineering (CBSE)
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20130206.13
Commercial off-the-shelf (COTS) technologies have emerged over the past decade. COTS technology has gained significant popularity as a way of developing optimal, efficient and economical software systems quickly while mapping business requirements. As a consequence, the need to design effective strategies for enabling large-scale reuse, whilst overcoming the risks involved in using a particular technology, still remains. The use of COTS technology introduces many problematic factors that have not yet been fully solved; among them are the lack of comprehensive tools and of efficient methods to manage and collect the information required to support COTS software selection. Keeping all these issues in view, this research report presents an Optimal Performance Model (OPM) for gathering the information that is needed to define COTS market segments in a way that makes software component selection more effective and efficient. The information collected is highly diverse, so the suggested OPMs help to cover different aspects and fields of COTS software selection. The design model is based on several software quality standards. Commercial off-the-shelf software has gained considerable popularity as an approach that quickly and economically creates software systems that address business requirements. This research work presents an approach for defining assessment principles for reusable software components.
Optimal Performance Model Investigation in Component-Based Software Engineering (CBSE)
doi:10.11648/j.ajsea.20130206.13
American Journal of Software Engineering and Applications
2014-01-01
© Science Publishing Group
Muhammad Osama Khan
Ahmed Mateen
Ahsan Raza Sattar
Optimal Performance Model Investigation in Component-Based Software Engineering (CBSE)
2
6
149
149
2014-01-01
2014-01-01
10.11648/j.ajsea.20130206.13
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20130206.13
© Science Publishing Group
Extended Implementation of Change Impact Analysis Model-Based Framework to Enhance Predicting the Effect of a Change of Service in a Grid Environment
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20130206.12
Continuous monitoring of changes to utility services and products in a distributed information system is an interesting issue in software engineering. Such changes affect the semantics and structural complexity of the system, since a change to one part will in most cases result in changes to other parts. Therefore, in design and redesign for customization, predicting this change presents a significant challenge. Changes are intended to fix faults or to improve or update products and services. The lack of validated, widely accepted, and adopted tools for planning, estimating, and performing maintenance contributes to the problem. One effective way of assessing changeability is to assess the impact of changes through a well-validated model and framework. This paper is an extended report on the implementation of a change propagation framework, together with its associated change impact analysis factor adaptation model and a fault and failure assumption model, to predict the effect of a change of a service in a grid environment. While implementing the framework, data was collected for three hypothetical years, which helped to predict the next two years consecutively. Significant results corresponding to the impact analysis factor were obtained, showing the practicality of using Bayesian statistics (as against the unreported regression method) to achieve a best-fit prediction. We conclude that the higher the number of services that depend on a faulty service requiring a change, the higher the impact due to fault propagation.
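To illustrate the Bayesian flavour of the prediction described above (this is a toy Beta-Binomial sketch, not the paper's actual model), one can update a propagation probability from hypothetical historical counts and scale the expected impact by the number of dependent services:

```python
# Illustrative Beta-Binomial sketch (not the paper's model): estimate the
# probability that a change to a faulty service propagates to a dependent
# service, then scale the expected impact by the number of dependent services.

def posterior_propagation(prior_a, prior_b, propagated, observed):
    """Update a Beta(prior_a, prior_b) belief with observed propagation counts."""
    a = prior_a + propagated
    b = prior_b + (observed - propagated)
    return a / (a + b)          # posterior mean propagation probability

def expected_impact(p_propagate, dependent_services):
    """Expected number of dependent services affected by one change."""
    return p_propagate * dependent_services

# Hypothetical data: 12 of 20 past changes propagated to dependants.
p = posterior_propagation(prior_a=1, prior_b=1, propagated=12, observed=20)
for n in (2, 5, 10):
    print(n, round(expected_impact(p, n), 2))   # impact rises with dependants
```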
Extended Implementation of Change Impact Analysis Model-Based Framework to Enhance Predicting the Effect of a Change of Service in a Grid Environment
doi:10.11648/j.ajsea.20130206.12
American Journal of Software Engineering and Applications
2014-01-01
© Science Publishing Group
Obeten Obi Ekabua
Extended Implementation of Change Impact Analysis Model-Based Framework to Enhance Predicting the Effect of a Change of Service in a Grid Environment
2
6
140
140
2014-01-01
2014-01-01
10.11648/j.ajsea.20130206.12
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20130206.12
© Science Publishing Group
Software Security Metric Development Framework (An Early Stage Approach)
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20130206.14
This paper presents an extensive survey of software security metrics and puts forth an effort to characterize design-time software security. Misconceptions associated with security metrics are identified and discussed, and a list of characteristics that good security metrics should possess is given. In the absence of any standard guideline or methodology for developing early-stage security metrics, an effort has been made to provide a strong theoretical basis for such a framework. As a result, a Security Metrics Development Framework is proposed in this paper. Our next effort will be to implement the proposed framework to develop security metrics in the early stages of the software development life cycle.
Software Security Metric Development Framework (An Early Stage Approach)
doi:10.11648/j.ajsea.20130206.14
American Journal of Software Engineering and Applications
2014-01-01
© Science Publishing Group
A. Agrawal
R. A. Khan
Software Security Metric Development Framework (An Early Stage Approach)
2
6
155
155
2014-01-01
2014-01-01
10.11648/j.ajsea.20130206.14
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20130206.14
© Science Publishing Group
Do Agile Methods Increase Productivity and Quality
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20140301.11
Agile methods appeared in the history of software development methods as a solution to several recurring problems, but it is still not clear whether they produce a significant improvement in productivity and quality when compared to traditional software development methods. In order to clarify this issue and contribute to a better understanding of these methods, we designed an empirical study in which Agile and traditional methods were compared in an academic context. By applying a traditional method to the development of software products, we obtained a more reproducible result, though we could not obtain evidence of an improvement in quality. By contrast, by applying an Agile method we obtained evidence of higher productivity, but with a significant dispersion, an aspect that would be interesting to analyze in future studies.
Do Agile Methods Increase Productivity and Quality
doi:10.11648/j.ajsea.20140301.11
American Journal of Software Engineering and Applications
2014-04-18
© Science Publishing Group
Gabriela Robiolo
Daniel Grane
Do Agile Methods Increase Productivity and Quality
3
1
11
11
2014-04-18
2014-04-18
10.11648/j.ajsea.20140301.11
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20140301.11
© Science Publishing Group
Application Methods of Ant Colony Algorithm
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20140302.11
As one of the most prominent and beneficial methods of artificial intelligence, the ant colony algorithm takes advantage of the communal behavior of ants in nature to solve optimization problems in various fields. However, this useful algorithm requires extensive and repetitive computation; as a result, processing time is one of its most serious challenges. For optimization problems in which runtime is critical, this paper reviews the previously applied methods and considers the advantages and disadvantages of each, highlighting the problems that algorithm designers encounter.
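For readers unfamiliar with the basic mechanics the survey above refers to, the following is a minimal ant colony optimization sketch for a tiny travelling salesman instance; the distance matrix and parameters are hypothetical and the code is purely illustrative of the pheromone-based search, not any specific method reviewed in the paper.

```python
# Minimal ant colony optimisation sketch for a small symmetric TSP
# (illustrative only; distances and parameters are hypothetical).
import random

dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
n = len(dist)
pheromone = [[1.0] * n for _ in range(n)]
alpha, beta, rho, Q = 1.0, 2.0, 0.5, 100.0   # pheromone weight, heuristic weight, evaporation, deposit

def tour_length(tour):
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def build_tour():
    start = random.randrange(n)
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        i = tour[-1]
        weights = [(j, (pheromone[i][j] ** alpha) * ((1.0 / dist[i][j]) ** beta))
                   for j in unvisited]
        total = sum(w for _, w in weights)
        r, acc = random.uniform(0, total), 0.0
        for j, w in weights:                 # roulette-wheel choice of next city
            acc += w
            if acc >= r:
                tour.append(j)
                unvisited.remove(j)
                break
    return tour

best = None
for _ in range(100):                         # iterations
    tours = [build_tour() for _ in range(10)]    # 10 ants per iteration
    for i in range(n):                       # evaporation
        for j in range(n):
            pheromone[i][j] *= (1 - rho)
    for t in tours:                          # deposit proportional to tour quality
        L = tour_length(t)
        for k in range(n):
            a, b = t[k], t[(k + 1) % n]
            pheromone[a][b] += Q / L
            pheromone[b][a] += Q / L
        if best is None or L < tour_length(best):
            best = t

print(best, tour_length(best))
```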
Application Methods of Ant Colony Algorithm
doi:10.11648/j.ajsea.20140302.11
American Journal of Software Engineering and Applications
2014-06-23
© Science Publishing Group
Elnaz Shafigh Fard
Khalil Monfaredi
Mohammad H. Nadimi
Application Methods of Ant Colony Algorithm
3
2
20
20
2014-06-23
2014-06-23
10.11648/j.ajsea.20140302.11
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20140302.11
© Science Publishing Group
Software Reuse Facilitated by the Underlying Requirement Specification Document: A Knowledge-Based Approach
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20140303.11
Reinventing the wheel may not be appropriate in all instances of software development, so rather than doing this, reuse of software artifacts should be embraced. Reuse offers benefits that include reduced overall development costs, increased reliability, standards compliance, accelerated development, and reduced process risk. However, reusable software artifacts may not be considered useful if they cannot be accessed and understood. In this work, a knowledge-based system was designed to capture requirements specification documents as abstract artifacts to be reused. Identification and acquisition of both explicit and tacit knowledge, an important step in knowledge base development, was carried out through extraction from customer requirement documents, interviews with domain experts, and personal observation. Protege 4.1 was used as the tool for developing the ontology, and the classified Web Ontology Language (OWL) ontology was searched, based on the underlying production rules for querying and retrieval of artifacts, to deduce reusable requirement components. Knowledge was formalized and testing was carried out using software requirement specification documents from different domains. Results show that only requirements with similar object properties, called system purpose, could really reuse such artifacts. The possibility of accessing more reusable artifacts lies in updating the repository with more requirement specification documents. Scopes and purposes of previously developed software that would suit a proposed system in the same (or a similar) domain can be found, consequently supporting the reuse of any of the end-products of such previously developed software.
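As a hedged illustration of retrieving requirement artifacts from an OWL ontology by a shared "system purpose", the sketch below uses the rdflib library with a SPARQL query; the ontology file, namespace, property name, and keyword are all hypothetical and will differ from the authors' Protege ontology and production rules.

```python
# Illustrative sketch: querying an OWL requirements ontology for reusable
# artifacts that share a "system purpose". File name, namespace, and property
# names are hypothetical; the paper's own ontology will differ.
from rdflib import Graph

g = Graph()
g.parse("requirements.owl", format="xml")   # hypothetical ontology exported from Protege

query = """
PREFIX req: <http://example.org/requirements#>
SELECT ?artifact ?purpose WHERE {
    ?artifact req:hasSystemPurpose ?purpose .
    FILTER regex(str(?purpose), "inventory", "i")   # purpose keyword of the new system
}
"""

for artifact, purpose in g.query(query):
    print(artifact, purpose)
```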
Software Reuse Facilitated by the Underlying Requirement Specification Document: A Knowledge-Based Approach
doi:10.11648/j.ajsea.20140303.11
American Journal of Software Engineering and Applications
2014-07-10
© Science Publishing Group
Oladejo F. Bolanle
Ayetuoma O. Isaac
Software Reuse Facilitated by the Underlying Requirement Specification Document: A Knowledge-Based Approach
3
3
28
28
2014-07-10
2014-07-10
10.11648/j.ajsea.20140303.11
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20140303.11
© Science Publishing Group
Models and Frameworks for a Successful Virtual Learning Environment (VLE) Implementation
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20140304.11
E-learning has become one of the major components of education processes, and it is one of the most important elements through which universities can attain a competitive advantage. A virtual learning environment (VLE), which is considered a subpart of the LMS, allows educators and educational systems to go beyond place and time in communication with every student. For this reason universities focus on having an LMS, for it helps users access educational sources that are not only reliable but can also be integrated with other systems available at the university. This paper highlights and explores the different theories and methodologies related to implementing and switching virtual learning environments successfully. Many previous studies, frameworks, theories, and models have been reviewed; those models and frameworks identify how successful the implementation of virtual learning environments is in higher educational institutes.
Models and Frameworks for a Successful Virtual Learning Environment (VLE) Implementation
doi:10.11648/j.ajsea.20140304.11
American Journal of Software Engineering and Applications
2014-09-05
© Science Publishing Group
Ayman Ahmed AlQudah
Models and Frameworks for a Successful Virtual Learning Environment (VLE) Implementation
3
4
45
45
2014-09-05
2014-09-05
10.11648/j.ajsea.20140304.11
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20140304.11
© Science Publishing Group
Contributions to the Adoption of a Service-Oriented Architecture in an Autarchy
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20140304.12
The implementation of e-Government in public administration allows the development of more service channels with suppliers and relationships with citizens, promoting a safe service and securing the confidentiality of information. Service-oriented architecture (SOA) is a new way of developing systems that promotes a shift from writing software to assembling and integrating services. By adopting an SOA approach and implementing it using supporting technologies, organizations can build flexible systems that implement changing business processes quickly and make extensive use of reusable components. In this paper we describe the approach we followed in adopting an SOA in an autarchy with regard to the implementation of a public procurement process integrated with the existing systems. We present the steps followed, the difficulties, and the advantages of this integration, pointing out the facilitating role of Web services in the design and implementation of the service.
Contributions to the Adoption of a Service-Oriented Architecture in an Autarchy
doi:10.11648/j.ajsea.20140304.12
American Journal of Software Engineering and Applications
2014-09-27
© Science Publishing Group
Paul Andre da Fonseca Moreira Coelho
Rui Manuel da Silva Gomes
Contributions to the Adoption of a Service-Oriented Architecture in an Autarchy
3
4
55
55
2014-09-27
2014-09-27
10.11648/j.ajsea.20140304.12
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20140304.12
© Science Publishing Group
Orlando Nursing Process Based Healthcare Information Management System
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20140305.11
The processing and management of healthcare records is not a trivial exercise; there is a need to apply ICT to healthcare management systems in order to meet globally accepted standards of health care. Many healthcare systems have been designed and implemented, but they do not adequately incorporate the nursing process, and most of them do not consider the needs and aspirations of patients. In this work, a modified framework based on Orlando's nursing process, which focuses on improving the patient's behavior through actions based on the patient's needs found through effective interaction with the patient, was designed and a prototype implemented. The design was implemented using Visual Basic.NET and SQL because of their support for implementing web-based systems. The system was hosted on a website for a period of two months; real-life data on medical practitioners and patients were captured, analyzed, and evaluated, and users interacted with the hosted system during the evaluation period. The system was evaluated using a usability test and a structured questionnaire. The results showed 93.10% participation efficiency, while ease of use, operational efficiency, and data protection each scored more than 80%. This shows that the Healthcare Information System (HIS) is an effective life-saving system that can enhance health workers' quality of service, support timely and precise decision making, and significantly reduce the cost of health care through effective healthcare management. It is applicable in any healthcare environment irrespective of its socio-economic and technology settings, and applying the framework can help prevent the spread of deadly diseases such as the Ebola virus.
Orlando Nursing Process Based Healthcare Information Management System
doi:10.11648/j.ajsea.20140305.11
American Journal of Software Engineering and Applications
2014-10-15
© Science Publishing Group
Adegboye Adegboyega
Akpan Julius Aniefiok
Orlando Nursing Process Based Healthcare Information Management System
3
5
62
62
2014-10-15
2014-10-15
10.11648/j.ajsea.20140305.11
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20140305.11
© Science Publishing Group
Model-Based Approach to Design web Application Testing Tool
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20140305.12
Software engineering is a systematic approach, defined as an engineering discipline that addresses the practical methods and working processes of software engineers. This approach is based on analyzing, designing, assessing, implementing, testing, and re-engineering a given piece of software. All of these phases are important and have a specific role in the software engineering cycle, especially software testing, which is a significant element of the cycle and a fundamental key to software quality assurance. The goal of software testing is to assess software behavior by measuring the gap between the expected behavior of the software under test and the test results. This comparison allows the tester to analyze errors and bugs in order to fix them and improve the software. As a critical factor in SQA, software testing can be considered a definitive review of the tool's specification: it permits the tester to redesign the specification after the test in case of failure. The same procedure is applied to web applications, in similar ways and with the same goal of quality assurance, but web applications are more complicated to test because of the interaction of the application with the rest of the distributed system. More precisely, web application testing is a process that measures the functional and non-functional properties of a given web application in order to analyze its performance, fix errors, or even bring the application under test to a better level. The demand for web application (and, more generally, software) testing tools grows with the increase in application and software failures and their cost.
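The expected-versus-actual comparison described above can be illustrated with a minimal unittest sketch; the function under test, its name, and the expected values are hypothetical and stand in for whatever behavior a real web application testing tool would exercise.

```python
# Minimal sketch of the expected-vs-actual comparison at the heart of testing,
# applied to a hypothetical piece of application logic (not the paper's tool).
import unittest

def discount_price(price, is_member):
    """Toy application logic under test."""
    return round(price * 0.9, 2) if is_member else price

class DiscountTests(unittest.TestCase):
    def test_member_gets_ten_percent_off(self):
        self.assertEqual(discount_price(100.0, is_member=True), 90.0)   # expected behaviour

    def test_non_member_pays_full_price(self):
        self.assertEqual(discount_price(100.0, is_member=False), 100.0)

if __name__ == "__main__":
    unittest.main()   # a failing assertion exposes the gap the tester must analyse
```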
Model-Based Approach to Design web Application Testing Tool
doi:10.11648/j.ajsea.20140305.12
American Journal of Software Engineering and Applications
2014-11-11
© Science Publishing Group
Dalila Souilem Boumiza
Amani Ben Azzouz
Salma Boumiza
Model-Based Approach to Design web Application Testing Tool
3
5
67
67
2014-11-11
2014-11-11
10.11648/j.ajsea.20140305.12
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20140305.12
© Science Publishing Group
The Analysis of GCFS Algorithm in Medical Data Processing and Mining
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20140306.11
Feature selection plays a significant part in medical data processing and mining: it can reduce the dimensionality of datasets and enhance the performance of classifiers, and it is also very helpful for clinical decision support. At present, clinical decision support is mostly performed subjectively by physicians based on clinical knowledge, which may hinder diagnosis and treatment. This paper mainly examines the performance of the GCFS (Genetic Correlation-based Feature Selection) algorithm in the processing and mining of medical data; medical UCI datasets are employed as study material to demonstrate the improvement that feature selection brings to data classification. Compared with the CFS and GA (Genetic Algorithm) approaches, with ensemble learning methods employed as the test classifiers, the results show that the GCFS algorithm improves the performance of the test classifiers more than CFS and GA in almost all cases.
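A hedged sketch of the general idea behind genetic, correlation-based feature selection follows: a genetic search over feature subsets scored with the standard CFS merit, merit = k * r_cf / sqrt(k + k(k-1) * r_ff). The data, operators, and parameters are hypothetical and this is not the authors' GCFS implementation.

```python
# Illustrative GCFS-style sketch (not the paper's implementation): a genetic
# search over feature subsets scored with the CFS merit
#     merit = k * r_cf / sqrt(k + k*(k-1) * r_ff)
# where r_cf is the mean feature-class correlation and r_ff the mean
# feature-feature correlation. Data here is random and purely hypothetical.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                    # 8 candidate features
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(float)  # class depends on features 0 and 3

def cfs_merit(subset):
    idx = [i for i, bit in enumerate(subset) if bit]
    if not idx:
        return 0.0
    k = len(idx)
    r_cf = np.mean([abs(np.corrcoef(X[:, i], y)[0, 1]) for i in idx])
    if k == 1:
        r_ff = 0.0
    else:
        r_ff = np.mean([abs(np.corrcoef(X[:, i], X[:, j])[0, 1])
                        for i in idx for j in idx if i < j])
    return k * r_cf / np.sqrt(k + k * (k - 1) * r_ff)

def evolve(pop_size=20, generations=30, n_features=8):
    pop = rng.integers(0, 2, size=(pop_size, n_features))
    for _ in range(generations):
        scores = np.array([cfs_merit(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]      # truncation selection
        children = parents.copy()
        flips = rng.random(children.shape) < 0.1                # bit-flip mutation
        children[flips] ^= 1
        pop = np.vstack([parents, children])
    scores = np.array([cfs_merit(ind) for ind in pop])
    return pop[scores.argmax()]

print(evolve())   # ideally selects features 0 and 3
```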
The Analysis of GCFS Algorithm in Medical Data Processing and Mining
doi:10.11648/j.ajsea.20140306.11
American Journal of Software Engineering and Applications
2014-12-05
© Science Publishing Group
Xiao Yu Chen
Bo Liu
Zhe Feng Zhang
Xin Xia
The Analysis of GCFS Algorithm in Medical Data Processing and Mining
3
6
73
73
2014-12-05
2014-12-05
10.11648/j.ajsea.20140306.11
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20140306.11
© Science Publishing Group
Survey of Software Components to Emulate OpenFlow Protocol as an SDN Implementation
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20140306.12
Software Defined Networking (SDN) is the next wave in networking evolution. It may be considered a revolution rather than an evolution, since many concepts of conventional network protocols are reshaped. The OpenFlow protocol is the most widely deployed protocol in SDN, and emulation of OpenFlow-based network projects facilitates the implementation of new ideas and drives the development of the protocol. In this paper, a summary of many software components related to OpenFlow is presented. Most of these software components were tested by the researchers in order to simplify the choice for other researchers considering the implementation of OpenFlow projects. These tests showed that there are differences in performance between the controllers that support OpenFlow 1.0 and OpenFlow 1.3. Furthermore, the tested controllers differ in the applications they support.
Survey of Software Components to Emulate OpenFlow Protocol as an SDN Implementation
doi:10.11648/j.ajsea.20140306.12
American Journal of Software Engineering and Applications
2014-08-19
© Science Publishing Group
Mohammed Basheer Al-Somaidai
Estabrak Bassam Yahya
Survey of Software Components to Emulate OpenFlow Protocol as an SDN Implementation
3
6
82
82
2014-08-19
2014-08-19
10.11648/j.ajsea.20140306.12
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20140306.12
© Science Publishing Group
Open Source Software Selection Using an Analytical Hierarchy Process (AHP)
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20140306.13
There are many potentially good open source software (OSS) products available on the market with a free license. However, despite the variety of choices, the adoption rate is still low among potential users because there is no agreed, acceptable set of criteria for evaluating and selecting OSS. The selection criteria may differ between stakeholders within organisations, and users may have a biased perception of an OSS product's characteristics or problem-solving capabilities when selecting OSS products. Other restrictions are caused by inadequate documentation and user manuals, and by immature products. Therefore, users need to consider how to improve their decision making when selecting the most suitable OSS products. In this paper, the background research on OSS adoption and selection criteria is discussed and explored. Then the research methodology, processes, and implementation of the My Open Source Software Toolkit (MyOSST) v1.0 are covered. The analytical hierarchy process (AHP) was applied to the selection process to assist potential users in deciding on OSS products based on their preferred selection criteria. MyOSST v1.0 was tested and validated by IT professionals at one of the Malaysian universities. The results show that the tool is capable of assisting the decision-making process for selecting an appropriate OSS product.
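To make the AHP step concrete, here is a minimal sketch that derives criterion priorities from a pairwise comparison matrix via its principal eigenvector and checks Saaty's consistency ratio; the criteria and judgments are hypothetical and unrelated to MyOSST's actual configuration.

```python
# Minimal AHP sketch (illustrative; criteria and judgments are hypothetical).
# Priorities come from the principal eigenvector of the pairwise matrix and
# are checked with Saaty's consistency ratio.
import numpy as np

criteria = ["functionality", "documentation", "community", "licence"]
# A[i, j] = how much more important criterion i is than criterion j (1-9 scale).
A = np.array([
    [1,   3,   5,   7],
    [1/3, 1,   2,   3],
    [1/5, 1/2, 1,   2],
    [1/7, 1/3, 1/2, 1],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                       # criterion priorities

n = len(criteria)
ci = (eigvals.real[k] - n) / (n - 1)           # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]            # Saaty's random index
print(dict(zip(criteria, weights.round(3))), "CR =", round(ci / ri, 3))
```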
Open Source Software Selection Using an Analytical Hierarchy Process (AHP)
doi:10.11648/j.ajsea.20140306.13
American Journal of Software Engineering and Applications
2014-12-19
© Science Publishing Group
Yusmadi Yah Jusoh
Khadijah Chamili
Noraini Che Pa
Jamaiah H. Yahaya
Open Source Software Selection Using an Analytical Hierarchy Process (AHP)
3
6
89
89
2014-12-19
2014-12-19
10.11648/j.ajsea.20140306.13
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20140306.13
© Science Publishing Group
Design and Implementation of Image Search Algorithm
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20140306.14
Image search is becoming an urgent problem for the next generation of search engines. We first review the state of development of image search engines. Then, the main difficulties and key technologies of such an engine are analyzed. Next, the design method is elaborated in detail, covering image recognition, the perceptual hash algorithm, the system solution, the image retrieval procedure, and the software modules. Based on these design methods, we develop an image search engine and implement image search on the Internet. The test results show that the overall performance of our image search engine is excellent and achieves the desired design requirements. By using data filtering technology and a perceptual hash algorithm, the search time is less than 1 second, giving high search efficiency.
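For readers unfamiliar with perceptual hashing, the following is a minimal average-hash sketch, one simple variant of the technique; the paper's exact algorithm may differ, the image paths are hypothetical, and the code requires the Pillow library.

```python
# Minimal average-hash sketch (one simple perceptual hash; the paper's exact
# algorithm may differ). Requires Pillow; image paths are hypothetical.
from PIL import Image

def average_hash(path, hash_size=8):
    """Shrink, grey-scale, and threshold against the mean to get a 64-bit hash."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = ["1" if p > mean else "0" for p in pixels]
    return int("".join(bits), 2)

def hamming(h1, h2):
    """Number of differing bits; small distances mean visually similar images."""
    return bin(h1 ^ h2).count("1")

# print(hamming(average_hash("query.jpg"), average_hash("candidate.jpg")))
```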
Design and Implementation of Image Search Algorithm
doi:10.11648/j.ajsea.20140306.14
American Journal of Software Engineering and Applications
2014-12-26
© Science Publishing Group
Zhengxi Wei
Pan Zhao
Liren Zhang
Design and Implementation of Image Search Algorithm
3
6
94
94
2014-12-26
2014-12-26
10.11648/j.ajsea.20140306.14
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20140306.14
© Science Publishing Group
An Empirical Study on the Effectiveness of Automated Test Case Generation Techniques
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20140306.15
The advent of automated test case generation has helped to reduce the laborious task of generating test cases manually and is prominent in software testing research; as a result, several techniques have been developed to aid the automatic generation of test cases. However, some major automated test case generation techniques in current use have not been empirically evaluated to ascertain their performance, as many assumptions about technique performance are based on theoretical deductions. In this paper, we perform an experiment on two major automated test case generation techniques (the concolic test case generation technique and the combinatorial test case generation technique) and evaluate them based on selected metrics (the number of test cases generated, the complexity of the selected programs, the percentage of test coverage, and a performance score). The results of the experiment show that the combinatorial technique performed better than the concolic technique; hence, the combinatorial test case generation technique was found to be more effective than the concolic technique based on the selected metrics.
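As a simple illustration of the combinatorial generation idea compared above, the sketch below exhaustively combines the values of each input parameter; the parameters and values are hypothetical, and real combinatorial (e.g. pairwise) generators prune the full product further.

```python
# Illustrative combinatorial test-case generation: exhaustively combine the
# values of each input parameter with itertools.product. Parameters and values
# are hypothetical; pairwise (2-way) generators would prune this further.
from itertools import product

parameters = {
    "browser":   ["firefox", "chrome"],
    "os":        ["linux", "windows", "macos"],
    "logged_in": [True, False],
}

test_cases = [dict(zip(parameters, combo)) for combo in product(*parameters.values())]
print(len(test_cases))        # 2 * 3 * 2 = 12 combinations
for case in test_cases[:3]:
    print(case)
```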
An Empirical Study on the Effectiveness of Automated Test Case Generation Techniques
doi:10.11648/j.ajsea.20140306.15
American Journal of Software Engineering and Applications
2014-12-26
© Science Publishing Group
Bolanle F. Oladejo
Dimple T. Ogunbiyi
An Empirical Study on the Effectiveness of Automated Test Case Generation Techniques
3
6
101
101
2014-12-26
2014-12-26
10.11648/j.ajsea.20140306.15
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20140306.15
© Science Publishing Group
Music/Multimedia Technology: Melody Synthesis and Rhythm Creation Processes of the Hybridized Interactive Algorithmic Composition Model
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20140306.17
Music composition by machine requires the solution of a number of difficult problems in the fields of algorithm design, data representation, human interface design, and software engineering in general. These problems led to the objectives of this research. Consequently, a concept formulation was derived from the existing algorithmic composition models, where their strengths were harnessed and their weaknesses transparently subdued. This brought about the hybridization of the existing models, which gave birth to the Hybridized Interactive Algorithmic Composition model that leverages the speed and accuracy of the computer to complement human creativity in music improvisation and composition. This paper presents both the melody synthesis and the rhythm creation processes of the Hybridized Interactive Algorithmic Composition model.
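As a toy illustration of algorithmic melody and rhythm generation (not the authors' hybridized model, and with a hypothetical scale and duration set), one can fill a bar with random durations and random-walk over a scale:

```python
# Toy sketch of algorithmic melody and rhythm generation (illustrative only;
# the paper's hybridized model is far richer). Scale and durations hypothetical.
import random

C_MAJOR = ["C4", "D4", "E4", "F4", "G4", "A4", "B4", "C5"]
DURATIONS = [0.25, 0.5, 1.0]          # fractions of a bar

def make_rhythm(total_beats=4.0):
    """Fill a bar with randomly chosen note durations."""
    rhythm, remaining = [], total_beats
    while remaining > 0:
        d = random.choice([d for d in DURATIONS if d <= remaining])
        rhythm.append(d)
        remaining -= d
    return rhythm

def make_melody(rhythm):
    """Random-walk over the scale so consecutive notes stay close."""
    idx, melody = random.randrange(len(C_MAJOR)), []
    for duration in rhythm:
        melody.append((C_MAJOR[idx], duration))
        idx = max(0, min(len(C_MAJOR) - 1, idx + random.choice([-2, -1, 1, 2])))
    return melody

print(make_melody(make_rhythm()))
```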
Music/Multimedia Technology: Melody Synthesis and Rhythm Creation Processes of the Hybridized Interactive Algorithmic Composition Model
doi:10.11648/j.ajsea.20140306.17
American Journal of Software Engineering and Applications
2015-01-08
© Science Publishing Group
E. J. Garba
G. M. Wajiga
Music/Multimedia Technology: Melody Synthesis and Rhythm Creation Processes of the Hybridized Interactive Algorithmic Composition Model
3
6
111
111
2015-01-08
2015-01-08
10.11648/j.ajsea.20140306.17
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20140306.17
© Science Publishing Group
PID DC Motor Drive with Gain Scheduling
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20140306.16
In this work a software-based PID controller with gain scheduling is implemented to drive a DC motor. The LabVIEW PID controller tool, with its associated gain-scheduling VI, was used. The motor start-up interval was experimentally analyzed and divided into three regions with three related sets of PID gains. The gain-scheduling selection criterion was based on the absolute value of the dynamic error, and it was realized using case structures. Experiments show that the speed overshoot was eliminated and the drive system response became faster. In general, it is possible to auto-tune the PID controller to achieve a response with the required static or dynamic specifications.
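A minimal sketch of the gain-scheduling idea follows: three PID gain sets selected on the absolute error, applied to a toy first-order motor model. The gains, thresholds, and plant model are hypothetical and do not reproduce the LabVIEW implementation.

```python
# Minimal PID-with-gain-scheduling sketch (illustrative; gains, thresholds, and
# the first-order motor model are hypothetical, not the LabVIEW implementation).
def select_gains(abs_error):
    """Three gain sets chosen on the absolute value of the dynamic error."""
    if abs_error > 50:
        return 0.8, 0.05, 0.02     # start-up region: large error
    if abs_error > 10:
        return 0.5, 0.10, 0.05     # mid range
    return 0.3, 0.20, 0.01         # near the setpoint

def run(setpoint=100.0, dt=0.01, steps=500):
    speed, integral, prev_error = 0.0, 0.0, setpoint
    for _ in range(steps):
        error = setpoint - speed
        kp, ki, kd = select_gains(abs(error))
        integral += error * dt
        derivative = (error - prev_error) / dt
        u = kp * error + ki * integral + kd * derivative
        prev_error = error
        speed += dt * (-0.5 * speed + 2.0 * u)    # toy first-order motor model
    return speed

print(run())   # speed after 5 s of simulated time; should approach the setpoint
```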
PID DC Motor Drive with Gain Scheduling
doi:10.11648/j.ajsea.20140306.16
American Journal of Software Engineering and Applications
2015-01-08
© Science Publishing Group
Wasif Abdel Aziz Saluos
Mohammad Abdelkarim Alia
PID DC Motor Drive with Gain Scheduling
3
6
105
105
2015-01-08
2015-01-08
10.11648/j.ajsea.20140306.16
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20140306.16
© Science Publishing Group
Metrics for Quantification of the Software Testing Tools Effectiveness
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20150401.12
An automated testing tool helps testers to quantify the quality of software by testing the software automatically. To quantify software quality, there is always a need for good testing tools that satisfy the testing requirements of the project. A wide range of testing tools is available on the market, and they vary in approach, quality, usability, and other characteristics. To select the appropriate testing tool for a piece of software, a methodology is needed to prioritize the tools on the basis of these characteristics. We propose a set of metrics for measuring the characteristics of automated testing tools, to be used in the examination and selection of such tools. The proposed extended model provides metrics to calculate the effectiveness of functional testing tools on the basis of operability. Industry will benefit, as these metrics can be used to evaluate functional tools and then select a tool for the software to be tested, thereby reducing the testing effort, saving time, and gaining maximum monetary benefit.
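To show how metric-based prioritization of tools can work in practice, here is an illustrative weighted-scoring sketch; the tool names, metric weights, and scores are hypothetical and are not the metrics or data proposed in the paper.

```python
# Illustrative weighted-metric scoring of candidate testing tools (tool names,
# metric weights, and scores are hypothetical, not the paper's data).
weights = {"operability": 0.4, "coverage": 0.3, "usability": 0.2, "cost": 0.1}

tools = {
    "ToolA": {"operability": 8, "coverage": 7, "usability": 6, "cost": 9},
    "ToolB": {"operability": 6, "coverage": 9, "usability": 8, "cost": 5},
}

def effectiveness(scores):
    """Weighted sum of the per-metric scores (0-10 scale)."""
    return sum(weights[m] * scores[m] for m in weights)

ranking = sorted(tools, key=lambda t: effectiveness(tools[t]), reverse=True)
for name in ranking:
    print(name, round(effectiveness(tools[name]), 2))
```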
Metrics for Quantification of the Software Testing Tools Effectiveness
doi:10.11648/j.ajsea.20150401.12
American Journal of Software Engineering and Applications
2015-04-15
© Science Publishing Group
Pawan Singh
Mulualem Wordofa Regassa
Metrics for Quantification of the Software Testing Tools Effectiveness
4
1
22
22
2015-04-15
2015-04-15
10.11648/j.ajsea.20150401.12
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20150401.12
© Science Publishing Group
An MDA Method for Automatic Transformation of Models from CIM to PIM
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20150401.11
The Model Driven Architecture (MDA) approach introduces a clear separation of the business logic from the less stable implementation logic. It uses models, which are more perennial than code, and puts them at the centre of the development of software and information systems. The MDA approach consists of, firstly, developing the CIM model; secondly, obtaining the PIM model from the CIM; and finally, generating the PSM model from the PIM, which facilitates the generation of code for a chosen technical platform. In the literature, several works have reduced the MDA approach to the passage from PIM to PSM and then from PSM to code. Yet very little work has contributed to the CIM-to-PIM transformation, and existing approaches generally propose a CIM model that does not cover the different specifications of the Object Management Group (OMG), and/or the CIM-to-PIM transformation they define is in most cases manual or semi-automatic. Thus, our proposal aims to provide a solution to the problem of constructing the CIM and transforming it automatically into the PIM using QVT transformation rules. The approach represents the CIM by two models: a business process model reflecting both the static and the behavioral views of the system, and a functional requirement model defined by the use case model reflecting the functional view of the system. The transformation of the CIM allows us to generate the PIM level, represented by two models: a domain class model, which gives a structural view of the system at this level, and a model that describes the behavior of the system for each use case.
An MDA Method for Automatic Transformation of Models from CIM to PIM
doi:10.11648/j.ajsea.20150401.11
American Journal of Software Engineering and Applications
2015-03-10
© Science Publishing Group
Abdelouahed Kriouile
Najiba Addamssiri
Taoufiq Gadi
An MDA Method for Automatic Transformation of Models from CIM to PIM
4
1
14
14
2015-03-10
2015-03-10
10.11648/j.ajsea.20150401.11
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20150401.11
© Science Publishing Group
Student Database System for Higher Education: A Case Study at School of Public Health, University of Ghana
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20150402.11
The success of any organization, such as the School of Public Health, University of Ghana, hinges on its ability to acquire accurate and timely data about its operations, to manage this data effectively, and to use it to analyze and guide its activities. An integrated student database system offers users (student, registrar, HOD) a unified view of data from multiple sources; to provide a single consistent result for every object represented in these data sources, data fusion resolves the inconsistencies present in the heterogeneous sources. The main objective of this project is to build a robust integrated student database system that tracks and stores student records. This easy-to-use, integrated database application is geared towards reducing the time spent on administrative tasks. The system is intended to accept, process, and generate reports accurately, and any user can access it at any time, provided an internet facility is available. It is also intended to provide better services to users; to provide meaningful, consistent, and timely data and information; and to promote efficiency by converting paper processes to electronic form. The system was developed using PHP, HTML, CSS, and MySQL: PHP, HTML, and CSS were used to build the user interface, and the database was built using MySQL. Owing to the care taken in its development, the system is free of errors, efficient, and less time-consuming to use. All phases of the software development cycle were employed, the system is robust, and provision is made for future development.
Student Database System for Higher Education: A Case Study at School of Public Health, University of Ghana
doi:10.11648/j.ajsea.20150402.11
American Journal of Software Engineering and Applications
2015-04-22
© Science Publishing Group
Wisdom Kwami Takramah
Wisdom Kwasi Atiwoto
Student Database System for Higher Education: A Case Study at School of Public Health, University of Ghana
4
2
34
34
2015-04-22
2015-04-22
10.11648/j.ajsea.20150402.11
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20150402.11
© Science Publishing Group
Comparisons Between MongoDB and MS-SQL Databases on the TWC Website
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20150402.12
Owing to the huge amount of data on websites to be analysed, innovative web services are required to support them with high scalability and availability. The main reasons for using NoSQL databases are to handle huge amounts of data and to express large-scale distributed computations using Map-Reduce techniques. To enhance the quality of service for customers and solve the problems posed by the huge amounts of data on websites such as Facebook, Google, and Twitter, relational database technology has in recent years been gradually replaced by NoSQL databases to improve performance and expansion elasticity. In this paper, we compare the NoSQL MongoDB and MS-SQL databases and discuss the effectiveness of queries against them. In addition, relational database cluster systems often require greater server efficiency and capacity to be competent, which incurs cost problems. On the other hand, a NoSQL database can easily expand its capacity without any extra cost. The experiments show that NoSQL MongoDB is about ten times more efficient for reading and writing than the MS-SQL database. This verifies that NoSQL database technology is quite a feasible option for the future.
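The paper's benchmark setup is not described in this abstract; as context only, the sketch below shows one way such a read/write timing comparison could be set up in Python, assuming a local MongoDB instance reachable via pymongo and an MS-SQL instance reachable via pyodbc. The connection strings, collection/table names, and document shape are placeholders, not the configuration used in the paper.

```python
# Minimal sketch of a read/write timing comparison between MongoDB and MS-SQL.
# Assumes local test instances with an existing 'readings' table; all
# connection details and the schema are placeholders.
import time
import pymongo
import pyodbc

docs = [{"ts": i, "value": i * 0.5} for i in range(10_000)]

# --- MongoDB write/read ---
mongo = pymongo.MongoClient("mongodb://localhost:27017")
col = mongo["twc"]["readings"]
t0 = time.perf_counter()
col.insert_many(docs)
mongo_write = time.perf_counter() - t0
t0 = time.perf_counter()
_ = list(col.find({}))
mongo_read = time.perf_counter() - t0

# --- MS-SQL write/read ---
conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
                      "SERVER=localhost;DATABASE=twc;Trusted_Connection=yes")
cur = conn.cursor()
t0 = time.perf_counter()
cur.executemany("INSERT INTO readings (ts, value) VALUES (?, ?)",
                [(d["ts"], d["value"]) for d in docs])
conn.commit()
sql_write = time.perf_counter() - t0
t0 = time.perf_counter()
_ = cur.execute("SELECT ts, value FROM readings").fetchall()
sql_read = time.perf_counter() - t0

print(f"MongoDB  write {mongo_write:.3f}s  read {mongo_read:.3f}s")
print(f"MS-SQL   write {sql_write:.3f}s  read {sql_read:.3f}s")
```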
Comparisons Between MongoDB and MS-SQL Databases on the TWC Website
doi:10.11648/j.ajsea.20150402.12
American Journal of Software Engineering and Applications
2015-05-05
© Science Publishing Group
Chieh Ming Wu
Yin Fu Huang
John Lee
Comparisons Between MongoDB and MS-SQL Databases on the TWC Website
4
2
41
41
2015-05-05
2015-05-05
10.11648/j.ajsea.20150402.12
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20150402.12
© Science Publishing Group
Dynamic Models for Multiplication and Division Offered by GeoGebra
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.s.2015040201.11
One of the most important features of GeoGebra is the coordination of the geometric and algebraic representations, easily observed in the GeoGebra window. Using the GeoGebra software, the teacher can geometrically and fruitfully teach the concepts and algorithms of arithmetic operations in elementary school. Our paper focuses on two important operations: multiplication and division in the set of natural numbers. Using GeoGebra features, we visually demonstrate the concepts of these two operations and help students develop the process of mastering multiplication and division facts. Our paper aims to achieve three objectives. Firstly, teach multiplication and division using an area model with a base and height of 10 squares; the table designed for this special purpose can be considered a platform where objects, pictures or numbers are arranged in columns and rows. Secondly, teach division by using the concept of sharing or partitioning; we have designed a particular dynamic model allowing the teacher to convey the meaning of division so that students can better understand the division process. Thirdly, by creating dynamic models for teachers and students, we want to: 1. increase teachers' pedagogical content knowledge and improve instructional practice; 2. promote student learning by improving teaching practices and providing capacity-building solutions; 3. encourage teachers to engage in research activity and in innovative educational practices and teaching strategies.
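The GeoGebra constructions themselves cannot be reproduced as text here; as a rough stand-in, the short Python sketch below renders the same 10-by-10 area model idea, shading a rows-by-columns rectangle of unit squares so the product can be read off as the covered area. It is only an illustration of the area model, not the authors' GeoGebra applet, and the function name is hypothetical.

```python
# Toy text rendering of the area model for multiplication on a 10 x 10 grid:
# a 'rows x cols' rectangle of filled squares represents rows * cols.
def area_model(rows: int, cols: int, size: int = 10) -> None:
    for r in range(size):
        print("".join("■" if r < rows and c < cols else "·" for c in range(size)))
    print(f"{rows} x {cols} = {rows * cols} filled squares")

area_model(3, 7)   # 3 rows of 7 -> 21 squares shaded
```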
Dynamic Models for Multiplication and Division Offered by GeoGebra
doi:10.11648/j.ajsea.s.2015040201.11
American Journal of Software Engineering and Applications
2014-10-17
© Science Publishing Group
Lindita Kllogjeri
Pellumb Kllogjeri
Dynamic Models for Multiplication and Division Offered by GeoGebra
4
2
6
6
2014-10-17
2014-10-17
10.11648/j.ajsea.s.2015040201.11
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.s.2015040201.11
© Science Publishing Group
Computer Programs - New Considerations in Teaching and Learning Mathematics Science
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.s.2015040201.12
There are many applications of computer informatics, such as performing computations, plotting graphs for use in mathematics papers and for studying the properties of functions, and solving and discussing problems in mathematics, physics, economics, social topics and so on. This paper examines how much computer programs (while we are studying something or making trials by manipulation) help the teacher in finding answers to different mathematical problems or in formulating mathematical statements or facts (in other fields of science as well). We present several examples so that teachers and students can take them into consideration while using computer programs to teach and learn. It is important that teachers and students try these examples and others themselves by manipulating computer programs, making trials and keeping notes, in order to find out that there are limitations in the computer programs. The computer program used is GeoGebra.
Computer Programs - New Considerations in Teaching and Learning Mathematics Science
doi:10.11648/j.ajsea.s.2015040201.12
American Journal of Software Engineering and Applications
2015-02-14
© Science Publishing Group
Qamil Kllogjeri
Pellumb Kllogjeri
Computer Programs - New Considerations in Teaching and Learning Mathematics Science
4
2
13
13
2015-02-14
2015-02-14
10.11648/j.ajsea.s.2015040201.12
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.s.2015040201.12
© Science Publishing Group
Towards a Framework for Enabling Operations of Livestock Information Systems in Poor Connectivity Areas
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20150403.11
Livestock farming is one of the major agricultural activities in the country and contributes towards achieving the development goals of the National Strategy for Growth and Reduction of Poverty (NSGRP). Smallholder livestock keepers depend on information from livestock field officers for sound decision making. Mobile application based solutions, which are currently widely proposed to facilitate the process, fail to perform in poor connectivity areas. This study proposes a machine learning based framework that will enhance the performance of mobile application based solutions in poor connectivity areas. The study used primary and secondary data. The primary data were collected through surveys, questionnaires, interviews, and direct observations; the secondary data were collected through books, articles, journals, and Internet searches. The Open Data Kit (ODK) tool was used to collect responses from the respondents and their geographical positions, and Google Earth was used to produce a distribution map of smallholder livestock keepers. Results show that smallholder livestock keepers are geographically scattered and depend on field livestock officers for the exchange of information. Their means of communication are mainly face-to-face conversation and mobile phones, and they do not use any livestock information system. The proposed framework will enable operations of livestock information systems in poor connectivity areas, where the majority of smallholder livestock keepers live. This paper provides the requirements model necessary for designing and developing the machine learning-based application framework for enhancing the performance of livestock mobile application systems, which will enable operations of livestock information systems in poor connectivity areas.
Towards a Framework for Enabling Operations of Livestock Information Systems in Poor Connectivity Areas
doi:10.11648/j.ajsea.20150403.11
American Journal of Software Engineering and Applications
2015-05-08
© Science Publishing Group
Herbert Peter Wanga
Khamisi Kalegele
Towards a Framework for Enabling Operations of Livestock Information Systems in Poor Connectivity Areas
4
3
49
49
2015-05-08
2015-05-08
10.11648/j.ajsea.20150403.11
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20150403.11
© Science Publishing Group
Local Feature Extraction Models from Incomplete Data in Face Recognition Based on Nonnegative Matrix Factorization
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20150403.12
Missing data usually arise in the process of data collection, transmission, processing, preservation and application, due to various reasons. In face recognition research, missing image pixel values affect feature extraction. How to extract local features from incomplete data is an interesting as well as important problem. Nonnegative matrix factorization (NMF) is a low-rank matrix factorization method that has been successfully used for local feature extraction in various disciplines, face recognition included. This paper mainly deals with this problem. Firstly, we classify the patterns of missing image pixel values; secondly, we provide local feature extraction models based on nonnegative matrix factorization under different types of missing data; thirdly, we compare the local feature extraction capabilities of the above models under different missing ratios of the original data. The recognition rate is investigated under different data missing patterns. Numerical experiments are presented and conclusions are drawn at the end of the paper.
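The specific models compared in the paper are not given in the abstract; the sketch below shows one standard way to run NMF when some pixels are missing, masking the unobserved entries in the multiplicative updates. It is a generic masked (weighted) NMF sketch under assumed notation (V data matrix, M binary observation mask), not the authors' particular models.

```python
# Generic masked (weighted) NMF via multiplicative updates: only observed
# entries (mask M == 1) contribute to the factorization V ≈ W @ H.
import numpy as np

def masked_nmf(V, M, rank=16, iters=200, eps=1e-9, seed=0):
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank))
    H = rng.random((rank, m))
    for _ in range(iters):
        WH = W @ H
        W *= ((M * V) @ H.T) / ((M * WH) @ H.T + eps)   # update basis (features)
        WH = W @ H
        H *= (W.T @ (M * V)) / (W.T @ (M * WH) + eps)   # update coefficients
    return W, H

# Example: a random nonnegative "image matrix" with roughly 30% of pixels missing.
V = np.random.default_rng(1).random((64, 48))
M = (np.random.default_rng(2).random(V.shape) > 0.3).astype(float)
W, H = masked_nmf(V, M)
err = np.linalg.norm(M * (V - W @ H)) / np.linalg.norm(M * V)
print(f"relative reconstruction error on observed entries: {err:.3f}")
```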
Local Feature Extraction Models from Incomplete Data in Face Recognition Based on Nonnegative Matrix Factorization
doi:10.11648/j.ajsea.20150403.12
American Journal of Software Engineering and Applications
2015-05-13
© Science Publishing Group
Yang Hongli
Hu Yunhong
Local Feature Extraction Models from Incomplete Data in Face Recognition Based on Nonnegative Matrix Factorization
4
3
55
55
2015-05-13
2015-05-13
10.11648/j.ajsea.20150403.12
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20150403.12
© Science Publishing Group
Designing a Machine Learning – Based Framework for Enhancing Performance of Livestock Mobile Application System
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20150403.13
Smallholder livestock keepers live in rural areas where there is poor Internet connectivity, and many mobile based systems do not function well in such areas. To address these concerns, an Android mobile application will be designed and installed on a smartphone. The application will have an easy-to-use Graphical User Interface (GUI) and request resources from the server through the Internet. This Intelligent Livestock Information System (ILIS) will be able to provide and predict feedback to the livestock keepers. The solution will also collect livestock data from livestock keepers through mobile phones; the data will then be sent to the database directly if connectivity is available, or through synchronization if connectivity is poor. Livestock experts will be able to view the data and respond to any query from livestock keepers, and the system will also be able to learn and predict the responses using machine learning techniques. The goal of the ILIS is to provide livestock services to anyone at any time, overcoming the constraints of place, time and character. This paper presents the software, hardware and architecture design of the machine learning based livestock information system. Overall, this solution embodies an artificial intelligence approach that combines hardware and software technologies. The design will leverage the Android ADK operating system and Android mobile devices or tablets. Our main contribution is the Intelligent Livestock Information System, which is a novel idea in the field of mobile livestock information systems.
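The Android implementation itself is not shown in the abstract; as a language-neutral illustration, the small Python sketch below captures the store-and-forward behaviour described above: records are pushed immediately when connectivity is available and queued locally for later synchronization when it is not. The function and queue names are hypothetical, not part of the authors' design.

```python
# Hedged sketch of the store-and-forward idea: send immediately when online,
# otherwise queue locally and flush once connectivity returns.
from collections import deque

pending = deque()           # local queue standing in for on-device storage

def is_online() -> bool:
    return False            # placeholder for a real connectivity check

def send_to_server(record: dict) -> None:
    print("sent:", record)  # placeholder for an HTTP call to the backend

def submit(record: dict) -> None:
    if is_online():
        send_to_server(record)
    else:
        pending.append(record)        # keep it until connectivity returns

def synchronize() -> None:
    while pending and is_online():
        send_to_server(pending.popleft())

submit({"animal_id": 42, "symptom": "fever"})
print(len(pending), "record(s) awaiting synchronization")
```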
Designing a Machine Learning – Based Framework for Enhancing Performance of Livestock Mobile Application System
doi:10.11648/j.ajsea.20150403.13
American Journal of Software Engineering and Applications
2015-05-27
© Science Publishing Group
Herbert Peter Wanga
Nasir Ghani
Khamisi Kalegele
Designing a Machine Learning – Based Framework for Enhancing Performance of Livestock Mobile Application System
4
3
64
64
2015-05-27
2015-05-27
10.11648/j.ajsea.20150403.13
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20150403.13
© Science Publishing Group
Cyber-Physical Systems: A Framework for Prediction of Error in Smart Medical Devices
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20150404.12
Medical care services are designed to bring improvement to the health of patients. This is pursued with great vigor today through modern health care systems that include medical sensors and automatically controlled actuation to deliver smart and proactive health services. Embedded devices control the Smart Medical Devices (SMDs) used by physicians, nurses, and medical staff, which continuously interact with the human body or patient in one form or another. Cyber-Physical Systems (CPS) are integrations of computation with physical processes, monitored and controlled by embedded systems. CPS has positively affected a number of application areas, including communication, consumer energy, infrastructure, healthcare, manufacturing, military, robotics and transportation. The inappropriate use of these SMDs generates errors that are under-emphasized by stakeholders. Most users are only interested in the benefits derived from the use of SMDs and are careless about the danger these devices can pose to patients when used inappropriately. The error tendencies, possible factors and way forward are the subject matter of this paper. To achieve the stated objective, input data were provided through a critical incident analysis of an online database that provides readings from medical experts. These readings were compared to standard world benchmarks and best practices, and the difference between the readings and the standard benchmark was used to validate the existence of errors. A framework was developed for error prediction to improve safety in the use of SMDs. Due to the complexity of the problem, an algorithm was further developed to obtain an optimal solution of P1 to P5 within an acceptable threshold runtime, which shows the gravity of these challenges for patients.
Cyber-Physical Systems: A Framework for Prediction of Error in Smart Medical Devices
doi:10.11648/j.ajsea.20150404.12
American Journal of Software Engineering and Applications
2015-08-19
© Science Publishing Group
Sunday Anuoluwa Idowu
Olawale Jacob Omotosho
Olusegun Ayodeji Ojesanmi
Stephen Olusola Maitanmi
Cyber-Physical Systems: A Framework for Prediction of Error in Smart Medical Devices
4
4
79
79
2015-08-19
2015-08-19
10.11648/j.ajsea.20150404.12
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20150404.12
© Science Publishing Group
Matrix Decomposition for Recommendation System
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20150404.11
Matrix decomposition, when the rating matrix has missing values, is recognized as an outstanding technique for recommendation systems. In order to approximate the user-item rating matrix, we construct a loss function and append a regularization constraint to prevent overfitting; the solution of the matrix decomposition thus becomes an optimization problem. Alternating least squares (ALS) and stochastic gradient descent (SGD) are two popular approaches to solving such optimization problems. Alternating least squares with weighted regularization (ALS-WR) is a good parallel algorithm, which can operate independently on the user-factor matrix or the item-factor matrix. Based on the idea of the ALS-WR algorithm, we propose a modified SGD algorithm. In experiments on a test dataset, our algorithm outperforms ALS-WR. In addition, matrix decompositions based on our optimization method have lower RMSE values than some classic collaborative filtering algorithms.
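The abstract does not reproduce the modified algorithm; for context, the sketch below implements the plain baseline that ALS-WR and the authors' variant build on: SGD matrix factorization with L2 regularization, updating factors only on observed (user, item, rating) triples. The hyper-parameters and variable names are illustrative assumptions, not the paper's settings.

```python
# Baseline SGD matrix factorization with L2 regularization on observed ratings.
# Illustrative sketch only; not the paper's modified SGD or ALS-WR variant.
import numpy as np

def sgd_mf(triples, n_users, n_items, rank=10, lr=0.01, reg=0.05, epochs=30, seed=0):
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((n_users, rank))   # user factors
    Q = 0.1 * rng.standard_normal((n_items, rank))   # item factors
    for _ in range(epochs):
        for u, i, r in triples:
            pu, qi = P[u].copy(), Q[i].copy()
            err = r - pu @ qi                         # prediction error on one rating
            P[u] += lr * (err * qi - reg * pu)        # gradient step with L2 penalty
            Q[i] += lr * (err * pu - reg * qi)
    return P, Q

ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (2, 1, 1.0)]
P, Q = sgd_mf(ratings, n_users=3, n_items=2)
rmse = np.sqrt(np.mean([(r - P[u] @ Q[i]) ** 2 for u, i, r in ratings]))
print(f"training RMSE: {rmse:.3f}")
```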
Matrix Decomposition for Recommendation System
doi:10.11648/j.ajsea.20150404.11
American Journal of Software Engineering and Applications
2015-07-05
© Science Publishing Group
Jie Zhu
Yiming Wei
Binbin Fu
Matrix Decomposition for Recommendation System
4
4
70
70
2015-07-05
2015-07-05
10.11648/j.ajsea.20150404.11
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20150404.11
© Science Publishing Group
Implementation of Egypt Sat-1 Satellite Test Center Using LabVIEW
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20150405.11
At each stage of any satellite design cycle, a test system is required that verifies the operational functions of each satellite subsystem and the integration among the satellite subsystems. Usually, these test systems consist of many hardware components, which are very complicated and occupy a large space. The first Egyptian satellite, Egypt Sat-1, has a test center: the place where the satellite integration test sequences are carried out. This center consists of complicated hardware. In this paper a new approach, using the LabVIEW tool with a National Instruments (NI) chassis, is used to build a satellite test center (STC) prototype that reduces the cost, complexity and occupied area of the STC, and thereby tests the ability to replace the Egypt Sat-1 test center. The results of this paper show the quality of the new approach compared to the existing Egypt Sat-1 test center.
Implementation of Egypt Sat-1 Satellite Test Center Using LabVIEW
doi:10.11648/j.ajsea.20150405.11
American Journal of Software Engineering and Applications
2015-08-31
© Science Publishing Group
Mohamed Elhady Keshk
Mohamed Ibrahim
Noran Tobar
Hend Nabil
Mohamed Elemam
Implementation of Egypt Sat-1 Satellite Test Center Using LabVIEW
4
5
85
85
2015-08-31
2015-08-31
10.11648/j.ajsea.20150405.11
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=137&doi=10.11648/j.ajsea.20150405.11
© Science Publishing Group