Institute of Technology
Permanent URI for this community: https://etd.hu.edu.et/handle/123456789/66
The Institute of Technology focuses on education, research, and innovation
in engineering, technology, and applied sciences to support sustainable development.
Item EVALUATION OF PROPOSED EMBANKMENT DAM FOR DODOTA IRRIGATION PROJECT (Hawassa University, 2018-08-10) DAWUD MANZA DOLLEMO

The design and construction of embankment dams is increasing from time to time in our country to support the multipurpose utilization of water. Evaluating the proposed embankment dam for the Dodota Irrigation Project as an alternative design, by introducing an asphaltic concrete core or a clay core, is vital from the standpoint of safety and seepage control, and such cores are very important for structures in fault and earthquake areas. This study aimed to evaluate a proposed embankment dam as an alternative design and analysis for the Dodota Irrigation Project. To address this objective, an embankment dam with an impermeable asphaltic concrete core was proposed and analyzed for seepage and for static and dynamic stability using the GeoStudio 2012 numerical computer program. Based on the computations, the flux through the dam and foundation was found to be 0.000059 m³/s for the asphaltic concrete core case and 0.001334 m³/s for the clay core case. The factor of safety of the proposed embankment dam, for the alternative design at different construction stages and under different loading conditions, satisfied the minimum requirements of USACE (2003). The stresses observed were much lower than the expected bearing capacity of the foundation rock. The static deformation analysis computed for the proposed embankment dam shows that the horizontal and vertical deformations the dam may be subjected to were within tolerable limits. The dynamic deformation analysis likewise shows that the maximum vertical and horizontal deformations during shaking were within allowable limits. Generally, both the asphaltic-concrete-core rockfill and the clay-core rockfill dam options for the project can fulfill the basic requirements and the minimum factor of safety under all loading conditions.
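The flux comparison reported above can be sketched with Darcy's law (the thesis computed the fluxes with GeoStudio SEEP/W, not this simplified formula; the gradient and area arguments below are purely illustrative):

```python
# Darcy's law: q = k * i * A, with k the permeability (m/s), i the hydraulic
# gradient (dimensionless), and A the flow area (m^2). A homogeneous-section
# simplification, not the finite-element analysis used in the thesis.
def darcy_flux(k, i, area):
    """Seepage discharge in m^3/s for a homogeneous section."""
    return k * i * area

# Reported fluxes through the dam and foundation (values from the abstract):
q_asphalt_core = 0.000059  # m^3/s, asphaltic concrete core
q_clay_core = 0.001334     # m^3/s, clay core
ratio = q_clay_core / q_asphalt_core
print(f"clay-core flux is {ratio:.1f}x the asphalt-core flux")
```

At these reported values the clay-core section passes roughly 23 times more water than the asphaltic-concrete-core section, which is why the core choice matters for seepage control.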
Overall, the analysis in this thesis aims to indicate the possibility of constructing the dam for the Dodota Irrigation Project.

Item DECISION FRAMEWORK FOR THE USAGE OF CLOUD TECHNOLOGY IN ETHIOPIA HIGHER EDUCATION INSTITUTIONS (Hawassa University, 2019-10-06) SELAM DESALGNE

Rapid technological advancement is always creating new opportunities and new ways of working. Cloud technology is becoming popular across the world, especially in academic institutions. It is not a new technology but rather a new delivery model for information and services using existing technologies. The paradigm has recently been recognized as a key enabler of efficient and effective technological services that will reshape the delivery and support of educational services. This study was conducted in public Ethiopian higher education institutions to explore the critical determinants that influence the adoption of cloud technology. Although cloud computing offers a great deal of opportunity, its adoption is hampered by a lack of standards, and the relative lack of a general framework has created a dilemma for institutions over how to approach cloud adoption. An exploratory study was carried out. This research proposes the TOETAD conceptual framework, built on the Technology-Organization-Environment (TOE) model, the Diffusion of Innovation (DOI) theory, and the Technology Acceptance Model (TAM), with a Decision Maker context added to the model. Adoption determinants for the technology are examined through the lens of this integrated model. The framework factors were identified by critically reviewing studies in the literature, together with factors from industrial standards, within the context of Ethiopian higher education institutions. Data were collected through an online questionnaire survey of IT managers, lecturers, e-learning coordinators, and team leaders from 17 selected Ethiopian higher education institutions, with a total of 103 respondents.
On the other hand, the proposed framework was evaluated by experts to validate it. The results also help public higher education institutions in Ethiopia understand the nature of the problem and increase their awareness of the factors to be considered when adopting cloud computing.

Item ETHIOPIAN COFFEE BEAN DETECTION AND CLASSIFICATION USING DEEP LEARNING (Hawassa University, 2020-06-02) GETABALEW AMTATE

Ethiopia is the homeland of Coffea arabica. Coffee is the major export commodity and the country's highest source of foreign currency. In addition, coffee plays a great role in social interaction between people and is the source of income for coffee-producing farmers. Ethiopian coffee beans are distinct from each other in quality based on their geographical origins. Classification and grading of these beans are based on growing origin, altitude, bean shape and color, preparation method, and other attributes. However, the quality of the beans is currently determined by visual inspection, which is subjective, laborious, and prone to error; this calls for an alternative method that is precise, non-destructive, and objective. Thus, the objective of this research is to design and develop a model that characterizes and identifies coffee beans from six different origins of Ethiopia (Jimma, Limmu, Nekemte, Yirgacheffe, Bebeka, and Sidama). Coffee beans for this research were collected from the Ethiopian Coffee Quality Inspection and Auction Center (ECQIAC). Image processing and state-of-the-art deep learning techniques were employed to automatically classify coffee bean images into nine classes: washed Limmu, unwashed Limmu, washed Sidamo, unwashed Sidamo, washed Yirgacheffe, unwashed Yirgacheffe, unwashed Jimma, unwashed Nekemte, and washed Bebeka. A total of 9,836 coffee bean images were used to train, validate, and test the CNN model.
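The 80/10/10 train/validation/test split used for the 9,836 images can be sketched as follows (index bookkeeping only; the actual image pipeline and CNN are out of scope, and the function name is illustrative):

```python
import random

# Minimal sketch of an 80/10/10 split of a dataset, as described in the
# abstract. Shuffling with a fixed seed keeps the split reproducible.
def split_dataset(items, train=0.8, val=0.1, seed=42):
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train = int(n * train)
    n_val = int(n * val)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

train_set, val_set, test_set = split_dataset(range(9836))
print(len(train_set), len(val_set), len(test_set))
```

In practice a stratified split (equal class proportions in each subset) is preferable for a nine-class problem, but the proportions are the same idea.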
We compared the classification results of the model trained on different dataset sizes and hyperparameters. The model trained on 80% of the dataset, validated on 10%, and tested on 10% of the color coffee bean images, with batch normalization, scored 99.89% overall classification accuracy and a generalization log loss of 0.92%. In conclusion, the results of the study show that a CNN is an effective deep learning technique for the classification of Ethiopian coffee beans.

Item FAILURE INVESTIGATION OF EARTH FILL DAM: THE CASE OF ZANA DAM IN AMHARA REGION, ETHIOPIA (Hawassa University, 2020-07-03) DESTAW GASHAW ALEMU

The construction of dams serves a number of purposes, such as water supply, irrigation, hydroelectric power generation, flood control, and navigation. Zana dam is an earth-fill dam located on the Zana River and used for irrigation. The main problems of this dam are seepage and downstream slope failure. To address these problems, the current dam condition was evaluated by determining seepage and slope stability with Geo-Studio software, and the downstream slope failure and seepage through the body of the dam were assessed. Seepage and slope analyses were carried out using the SEEP/W and SLOPE/W tools of Geo-Studio at normal and current pool level conditions; the study was conducted mainly on the basis of laboratory investigation of the materials used for construction. The results demonstrated a material-property gap between what was stated in the design report and what was actually used in construction.
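The filter-compatibility question at the heart of this failure investigation can be sketched with the classic Terzaghi filter criteria (an illustration only; the thesis may apply a different code-specific form, and the D15 of the base material below is a hypothetical value, not a thesis result):

```python
# Classic Terzaghi filter criteria (simplified):
#   retention (piping):  D15(filter) <= factor * D85(base)
#   permeability:        D15(filter) >= factor * D15(base)
# with factor typically 4-5. All grain sizes in mm.
def filter_ok(d15_filter, d85_base, d15_base, factor=4.0):
    retention = d15_filter <= factor * d85_base
    permeability = d15_filter >= factor * d15_base
    return retention and permeability

# Gradations reported for Zana dam: D15(shell) = 3 mm, D85(core) = 0.2 mm.
# Retention would require D15(shell) <= 4 * 0.2 = 0.8 mm, but 3 mm >> 0.8 mm,
# so the shell cannot retain the core material -> piping risk.
# (d15_base = 0.05 mm is a hypothetical value for illustration.)
print(filter_ok(d15_filter=3.0, d85_base=0.2, d15_base=0.05))
```

A shell this coarse relative to the core is exactly the condition under which internal erosion of the base material develops, motivating the chimney filter recommended below.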
The amount of seepage from the analysis of the designed section was found to be 1.406×10⁻⁷ m³/s/m, whereas for the as-constructed section it was 1.683×10⁻³ m³/s/m. Accordingly, the slope stability analysis gave factors of safety of 1.335 and 1.193 for the designed and constructed sections, respectively, both of which are less than 1.5. On the other hand, using the newly proposed embankment section and the analyzed material properties, the seepage quantity through the embankment body was found to be 2.9218×10⁻⁷ m³/s/m, with minimum factors of safety of 2.127 and 2.285 under steady-state conditions for the upstream and downstream slopes, respectively, and a factor of safety of 1.963 when both horizontal and vertical seismic actions were applied. The major finding on the cause of failure is the absence of proper filter and drainage materials. Gradation analysis of the base and shell materials gave D15 (shell) = 3 mm and D85 (core) = 0.2 mm; the resulting comparison of 0.6 mm > 0.2 mm indicates piping, i.e., internal erosion of the base material. Consequently, the maximum and minimum bounds obtained for the filter material were D60min = 0.5 mm, D15min = 0.1 mm, D5min = 0.075 mm, D100max = 75 mm, D90max = 20 mm, D60max = 2.5 mm, and D15max = 0.5 mm. Therefore, chimney filter design and provision are mandatory for the safe life of this zoned earth-fill dam.

Item COMPUTATIONAL MODEL FOR WOLAYITTA LANGUAGE SPELLING CHECKER (Hawassa University, 2020-08-10) RAHEL SHUME

Spelling checker systems are built, and research is conducted worldwide, to meet the needs of different languages. In Ethiopia, spelling checker research has been carried out only for the Amharic language. These works have paved the way for research on other languages such as Wolayitta. A computational spelling checker is proposed and adapted for the Wolayitta language in this research. A word-level spelling checker was built based on an edit-distance algorithm and a language model, using open-source tools and platforms.
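The dictionary-plus-edit-distance approach just described can be sketched as follows (a minimal illustration; the dictionary words below are hypothetical placeholders, not entries from the Wolayitta lexicon the thesis used):

```python
# Classic Levenshtein distance via dynamic programming, plus a dictionary
# lookup that returns suggestions within a maximum edit distance.
def edit_distance(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def suggest(word, dictionary, max_dist=2):
    if word in dictionary:
        return []  # correctly spelled: nothing to suggest
    candidates = [(edit_distance(word, w), w) for w in dictionary]
    return [w for d, w in sorted(candidates) if d <= max_dist]

dictionary = {"keetta", "asa", "haatha"}  # hypothetical entries
print(suggest("keeta", dictionary))
```

The thesis additionally ranks suggestions with a language model; the sketch above only orders candidates by edit distance.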
A dictionary database was constructed from 13,313 words from a Wolayitta lexicon dictionary and 20,000 words from the Wolayitta Bible. The database was then split into training and testing sets. The statistical model accepts input from the user and checks whether it is available in the dictionary; if it is, the model does nothing, and if not, it gives suggestions. Testing was done in two phases: the first measured the error detection rate, and the second evaluated error correction. Tests were carried out on user input data, and an accuracy of 94.57% was achieved for spelling error detection, while an accuracy of around 90.98% was obtained for the spelling suggestion model. The results are promising, and further research can be pursued as per the recommendations made by the researcher.

Item IMPACTS OF LAND USE LAND COVER CHANGE ON RESERVOIR SEDIMENTATION (THE CASE OF RIBB DAM, IN LAKE TANA SUB-BASIN, ETHIOPIA) (Hawassa University, 2020-10-06) MEBRATU ESUBALEW ENGIDA

Land use land cover (LULC) change is a challenge and a continuous driver of environmental change. Understanding the rate and process of change is therefore basic to managing water resources and the environment at large. This study was intended to analyze the impacts of LULC changes on sediment load over the 2000 to 2018 period, to select critical (hot-spot) sub-basins, and to recommend best management practices for the Ribb watershed of the Lake Tana sub-basin, Ethiopia. Both climate and hydrometric (flow and sediment) data were collected and analyzed over the period 1990 to 2018. Two-date satellite imagery from the Landsat product (2000 and 2018) was used for land use change detection. A hybrid classification technique for extracting thematic information from the satellite images was employed, using the ERDAS model for LULC classification.
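The change-detection arithmetic applied to the two-date maps can be sketched as follows (the areas below are hypothetical round numbers chosen for illustration, not the thesis figures):

```python
# Annual rate of change between two classification dates, in ha/year,
# the unit used for the reported LULC change rates.
def change_rate(area_t1_ha, area_t2_ha, years):
    return (area_t2_ha - area_t1_ha) / years

# e.g. a class growing from 10,000 ha to 14,500 ha between 2000 and 2018:
rate = change_rate(10000.0, 14500.0, 2018 - 2000)
print(f"{rate:.1f} ha/year")
```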
The Soil and Water Assessment Tool (SWAT) model was calibrated and validated to estimate the sediment load of the watershed during the periods 1992 to 2001 and 2002 to 2007, respectively. To manage the sediment load, best management practices (BMPs), namely filter strips, grassed waterways, and contouring, were implemented as scenarios on the 2018 LU map. The land use change detection results indicate that cultivated land expanded from 66.87% in 2000 to 75.53% in 2018, an increase of 8.66%, at a rate of 608.915 ha/year. Similarly, settlement area increased by 2.09% over the 2000-2018 period, while shrub land and bare land decreased at rates of 412.868 and 227.651 ha/year, respectively, and the water body decreased at a rate of 1.593 ha/year. The SWAT model results show a reasonable fit of simulated sediment flux with observations, as evaluated by ENS (0.63), R² (0.67), and percent bias (17%) during calibration and ENS (0.58), R² (0.71), and percent bias (12%) during validation. Moreover, the soil loss rate increased by an average of 26.89 t/ha/year from the 2000 to the 2018 LULC, indicating that management practice within the watershed was weak. Among the BMP scenarios, the filter strip was the most significant. Considerable LULC conversion and soil loss occurred in the watershed over the 2000-2018 period and are expected to continue in the future. Thus, appropriate conservation and management practices are crucial to safeguard the life of the reservoir.

Item DEVELOPING IMAGE-BASED ENSET PLANT DISEASE IDENTIFICATION USING CONVOLUTIONAL NEURAL NETWORK (Hawassa University, 2020-11-07) UMER NURI MOHAMMED

Nowadays, the decline in food plant productivity is a major problem causing food insecurity, and plant disease is one of its contributing factors.
Early identification and accurate diagnosis of the health status of food plants is hence critical to limit the spread of plant diseases, and it should be done technologically rather than by manual labor. Traditional observation by farmers or domain experts is time-consuming, expensive, and sometimes inaccurate. The literature suggests that deep learning approaches are the most accurate models for the detection of plant disease. The convolutional neural network (CNN) is a popular approach that allows computational models composed of multiple processing layers to learn representations of image data with multiple levels of abstraction. These models have dramatically improved the state of the art in visual object recognition and image classification, which makes them a good fit for enset plant disease classification problems. For this purpose, we used a suitable CNN-based model for identifying and classifying the three most critical diseases of enset plants: enset bacterial wilt, enset leaf spot, and root mealybug. Enset is a major source of food in the southern, central, and southwestern parts of Ethiopia. A total of 14,992 images, including augmented images, in four categories (three diseased classes and one healthy class) were obtained from agricultural sectors stationed at Hawassa and Worabe, Ethiopia, and provided as input to the proposed model.
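The 10-fold cross-validation protocol used for evaluation can be sketched as index bookkeeping over the 14,992 images (the CNN itself is out of scope here; fold boundaries are contiguous for simplicity, whereas a real run would shuffle and stratify first):

```python
# Generate k (train, test) index pairs so that every sample appears in
# exactly one test fold. Fold sizes differ by at most one.
def k_fold_indices(n_samples, k=10):
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n_samples))
        yield train, test
        start += size

folds = list(k_fold_indices(14992, k=10))
print(len(folds), len(folds[0][1]))
```

The reported 99.53% accuracy would then be the mean test accuracy across the ten folds.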
Under the 10-fold cross-validation strategy, the experimental results show that the proposed model can effectively detect and classify the four classes of enset plant condition with a best classification accuracy of 99.53%, which is higher than that of other classical deep learning models such as MobileNet and Inception v3.

Item MASTERS OF SCIENCE IN COMMUNICATION ENGINEERING AND NETWORKING (Hawassa University, 2021) SHIMELIS ABATE BELAY

Currently, a number of wireless technologies are deployed to connect the daily activities of human beings in different ways and systems. Among these technologies, the Wireless Local Area Network (WLAN) is one that plays a crucial role; it is delivered over the 2.4 GHz and 5 GHz frequencies. As the number of users increases, two main problems become challenges: spectrum scarcity and throughput. To address these challenges, several studies have been and are being conducted on the 60 GHz millimeter-wave frequency for WLAN. This thesis compares the coverage and capacity performance of the 60 GHz channel over Rician fading channels with the 2.4 GHz and 5 GHz WLAN frequencies, to guide a better selection for WLAN users in the future, using bit error rate (BER) and SNR for small-scale (fast) fading with higher-order M-ary QAM modulation schemes. As a result, under Rician channel fading, 60 GHz shows higher throughput than the 2.4 and 5 GHz WLAN frequencies: the 60 GHz channel capacity is 13.5 times that of the 5 GHz channel and 54 times that of the 2.4 GHz channel. Therefore, it is more advantageous for high-throughput user demands than the 2.4 and 5 GHz frequencies used in the IEEE 802.11 standards. In terms of distance, however, the 60 GHz coverage is less than half that of the 2.4 GHz frequency and around 7 m less than the 5 GHz coverage.
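The reported capacity ratios are consistent with the Shannon formula at equal SNR, where the ratio between bands reduces to the bandwidth ratio. The channel widths below (2160 MHz for 60 GHz, 160 MHz for 5 GHz, 40 MHz for 2.4 GHz) are an assumption on our part, not stated in the abstract:

```python
import math

# Shannon capacity C = B * log2(1 + SNR). At identical SNR the capacity
# ratio between two channels equals the ratio of their bandwidths.
def capacity_bps(bandwidth_hz, snr_linear):
    return bandwidth_hz * math.log2(1 + snr_linear)

snr = 10 ** (20 / 10)  # 20 dB, arbitrary but identical across bands
c60 = capacity_bps(2160e6, snr)
c5 = capacity_bps(160e6, snr)
c24 = capacity_bps(40e6, snr)
print(round(c60 / c5, 1), round(c60 / c24, 1))  # → 13.5 54.0
```

Under these assumed channel widths the 13.5× and 54× factors quoted in the abstract fall out directly as 2160/160 and 2160/40.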
Hence, the shorter coverage of 60 GHz gives it an advantage as the best candidate for frequency reuse to solve spectrum scarcity.

Item QUERY EXPANSION FOR AFAAN OROMO INFORMATION RETRIEVAL USING AUTOMATIC THESAURUS (Hawassa University, 2021-03-05) SAMUEL MESFIN BAYU

Recently, the amount of textual information written in the Afaan Oromo language has been increasing dynamically. Likewise, the need to access that information also increases. However, it is difficult to retrieve documents that satisfy one's information need, because of users' inability to formulate a good query and because of terminological variation, or term mismatching, between the world of readers and the world of authors. Query expansion is an effective mechanism for reducing term-mismatch problems and improving the retrieval performance of IR systems. The idea behind query expansion is to reformulate the user's original query by adding related terms. In this study, an automatic Afaan Oromo thesaurus is constructed from manually collected documents. After text preprocessing is performed on the document corpus, the preprocessed words are vectorized in a multidimensional space using Word2Vec's skip-gram model, in which words that share similar contexts have similar vector representations. A cosine similarity measure was then applied to construct the thesaurus. A one-to-many association approach was employed to select expansion terms: the top five terms with the highest similarity score to the entire query were selected from the thesaurus and added to the user's original query. The reformulated query was then used to retrieve more relevant documents. Experiments were performed to assess the quality of the constructed thesaurus and the effect of integrating query expansion into the Afaan Oromo IR system. The results show that the constructed thesaurus generates related terms with an average relatedness accuracy of 62.1%.
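The top-k expansion step just described can be sketched with cosine similarity over toy vectors (the words and 3-dimensional embeddings below are hypothetical; the thesis used real Word2Vec skip-gram embeddings of much higher dimension):

```python
# Rank candidate terms by cosine similarity to the query vector and take
# the k most similar as expansion terms.
def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv)

def expand(query_vec, vocab, k=5):
    ranked = sorted(vocab, key=lambda w: cosine(query_vec, vocab[w]),
                    reverse=True)
    return ranked[:k]

vocab = {  # hypothetical terms with toy embeddings
    "barumsa": [0.9, 0.1, 0.0],
    "mana": [0.1, 0.9, 0.2],
    "barnoota": [0.8, 0.2, 0.1],
}
print(expand([1.0, 0.0, 0.0], vocab, k=2))
```

In the thesis, the top five such terms per query are appended to the original query before retrieval.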
On the other hand, integrating query expansion registered a performance improvement of 14.3% in recall and 2.9% in F-measure, with a performance decrease of 5.5% in precision.

Item DESIGN OF A MORPHOLOGICAL GENERATOR AND ANALYZER FOR SIDAAMU AFOO VERBS USING FINITE-STATE TRANSDUCER (Hawassa University, 2021-03-07) KITAW AYELE GESUMA

Sidaamu Afoo is the official language of the newly formed Sidaama national regional state. It is one of the under-resourced languages of Ethiopia, and there have been very few studies of its morphology. Morphological study is a low-level activity that paves the way for the development of high-level NLP applications. This study concerned the design and implementation of a morphological generator and analyzer for the language's verbs. A finite-state transducer is used to design the verb morphological generator and analyzer system, with the lexicon formalism as the design method and the foma programming language as the implementation toolkit. Forty-one distinct linguistic rules were used to form the variant word forms. The experimental results show that, out of 74,934 generated words, 94.8% are correctly generated and analyzed by the system; only 5.2% of the generated words are invalid and wrongly analyzed. The causes of these errors are optional replacement rules and the compiler's ignorance of the language's glottal consonant. Strict demarcation boundaries for optional replacement rules and alternative techniques for handling the glottal sound are left for future work.

Item GROUNDWATER POTENTIAL MAPPING USING GIS AND REMOTE SENSING: A CASE STUDY IN WEYIB SUB-BASIN, GENALE-DAWA RIVER BASIN, SOUTHEAST ETHIOPIA (Hawassa University, 2021-08-12) ABDULGEFAR MUHIDIN MOHAMMED

To fulfill the demand of a rapidly growing population in drought-prone areas with a high rate of urbanization, identification and management of groundwater resources are required.
In the Weyib Sub-basin, the search for an alternative source of water has always been a major issue. The current practice of groundwater potential zone (GWPZ) identification is time-consuming and uneconomical, so effective techniques are required for proper evaluation of groundwater resources. This study applied an integration of GIS, remote sensing (RS), and the Analytical Hierarchy Process (AHP) to map the GWPZ of the Weyib Sub-basin, Southeast Ethiopia. For this purpose, the physiographic, geological, and climatic factors influencing the GWPZ of the study area were characterized, and thematic maps of geomorphological landforms, lineament density, geology, rainfall distribution, drainage density, elevation, slope, LU/LC, and soil texture were prepared. System for Automated Geoscientific Analyses (SAGA) GIS, PCI Geomatics, RockWorks 16, IDRISI Selva, and Surfer 17.1 were employed for landform classification, lineament extraction, rose diagram preparation, pairwise comparison of the factors, and identification of groundwater flow direction, respectively. The AHP technique of multi-criteria decision analysis (MCDA) was employed to determine the relative weights and influences of the thematic layers. Geomorphological landform, lineament density, geology, and rainfall distribution were found to be the dominant factors, sharing the highest weightage of 67%. A GIS weighted-overlay approach was used to combine the thematic maps. The resulting GWPZ map of the study area indicates five zones: very high, high, moderate, poor, and very poor. The areal extent of the very high and high GWPZ is 41 km² and 2,032 km², respectively, while the moderate, poor, and very poor zones cover 2,088 km², 252 km², and 0.142 km². The dominant direction of groundwater flow is towards the NE and SE, coinciding with the direction of surface water flow and controlled by NW-SE striking geologic structures.
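The AHP weighting step can be sketched with the normalized column-sum approximation of the principal eigenvector (the 3×3 pairwise matrix below is hypothetical; the thesis compared nine thematic layers):

```python
# AHP weights from a pairwise-comparison matrix: normalize each column so it
# sums to 1, then average each row. For a perfectly consistent matrix this
# equals the principal-eigenvector weights.
def ahp_weights(matrix):
    n = len(matrix)
    col_sums = [sum(row[j] for row in matrix) for j in range(n)]
    normalized = [[matrix[i][j] / col_sums[j] for j in range(n)]
                  for i in range(n)]
    return [sum(normalized[i]) / n for i in range(n)]

# e.g. geomorphology vs lineament density vs rainfall (illustrative ratios):
m = [[1.0, 2.0, 4.0],
     [0.5, 1.0, 2.0],
     [0.25, 0.5, 1.0]]
w = ahp_weights(m)
print([round(x, 3) for x in w])  # → [0.571, 0.286, 0.143]
```

A full AHP application would also compute the consistency ratio to check that the pairwise judgments are coherent before using the weights in the overlay.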
The delineated GWPZ map was verified using the existing water point inventory data and indicates a good prediction accuracy of 84%. Thus, identification of GWPZs using GIS and RS through AHP is reliable for conducting similar studies.

Item A URL-based Phishing Attack Detection and Data Protection Model (Hawassa University, 2021-09-10) Yonathan Bukure Racho

Internet users are increasing rapidly and uninterruptedly, influencing the way of living. Every day, billions of websites are accessed across the globe to facilitate different uses for people. This positive reinforcement also results in internet abuse by hackers for their own benefit. Most of the time, internet abuse is experienced over mobile phones or email, and users are victimized without even knowing that they are being misused by hackers. Social engineering has become the hacker's tool for manipulating users psychologically into revealing secret information. Phishing is a kind of social engineering attack with the potential to harm an individual or a whole organization. The cybercriminal, called a phisher, constantly comes into contact with individuals through creative ways of compromising secret assets. Phishers use malicious URLs embedded in webpages that pose a severe threat while appearing legitimate. When a user clicks these links, they are redirected to a malicious webpage where attackers ask for secret information by misguiding the user. Such attacks must be properly addressed. This thesis focuses on URL-based phishing detection and data protection against such attacks. The contribution of this thesis is thus divided into two phases: (1) URL-based phishing attack detection, and (2) protection of individual/organizational assets. For the first phase, this thesis explored and implemented four machine learning algorithms: decision tree, random forest, naive Bayes, and logistic regression.
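Features of the kind commonly fed to such URL classifiers can be sketched as follows (the abstract does not specify the thesis's exact feature set, so the features below are illustrative, not the thesis's):

```python
import re
from urllib.parse import urlparse

# A handful of lexical URL features often used in phishing detection.
def url_features(url):
    parsed = urlparse(url)
    return {
        "length": len(url),
        "has_at": "@" in url,                  # '@' can hide the real host
        "num_dots": parsed.netloc.count("."),  # deep subdomain chains
        "has_ip": bool(re.match(r"^\d{1,3}(\.\d{1,3}){3}$", parsed.netloc)),
        "https": parsed.scheme == "https",
    }

print(url_features("http://192.168.0.1/login"))
```

Rows of such features, labeled phishing or legitimate, are what the decision tree, random forest, naive Bayes, and logistic regression models would be trained on.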
The performance of these algorithms was evaluated and compared on training and testing datasets, and based on the results obtained, the best algorithm is recommended. For the second phase, the thesis proposes a data protection model using a hybrid encryption method that combines the AES and RSA algorithms. This model ensures the confidentiality of information assets and protects them against various kinds of attacks. The overall proposed work is implemented in the Python programming language. The phishing detection phase concluded that random forest outperforms the other algorithms, giving the highest detection accuracy after important-feature selection: 96.89% and 99.06% detection accuracy on the testing and training datasets, respectively. Similarly, the data protection phase encrypted and decrypted data files very fast, i.e., within a few milliseconds, and ensured the confidentiality of data in transit.

Item EXPLORING A BETTER FEATURE EXTRACTION METHOD FOR AMHARIC HATE SPEECH DETECTION (Hawassa University, 2021-10-08) YESUF MOHAMED YIMAM

Hate speech is speech that causes people to be attacked, discriminated against, and hated because of their personal and collective identities. When hate speech grows, it causes death and the displacement of people from their homes and properties. Social media has the ability to spread hate speech widely. To solve this problem, various researchers have studied many ways to detect social media hate speech spreading in international and local languages. Because the problem is so serious, it needs to be carefully studied and addressed with a variety of solutions. Previous studies detect a speech as hate speech based on the frequency (occurrence) of a word in a given dataset, which means they do not consider the role of each word in a given sentence.
The main purpose of this study is to design a method that can generate hate speech features from a given text by identifying the role of a word in a given sentence, so that hate speech can more easily be distinguished from other forms of speech. To do this, various works related to this study were reviewed, and a new feature extraction method for Amharic hate speech detection was created. The model needs training and testing data, so posts and comments from 25 popular Facebook pages were collected to build the dataset. Whether a speech is hateful or not should be determined by the law that prohibits hate speech; therefore, using different filtration methods, texts containing religious, ethnic, and hate words were collected and given to law experts for manual annotation. The law experts labeled 2,590 items into three classes: religion-hate, ethnic-hate, and non-hate. After dataset preparation, a new feature extraction method that can distinguish hate speech from other speech was developed. The new method and the feature extraction methods used in related studies were implemented and compared using three machine learning classification algorithms: SVM, NB, and RF. The results across different evaluation metrics show that the new feature extraction method performed better in all combinations of classification algorithms. Using 80% of the 2,590 labeled items as a training set and the rest as a test set, a 96.2% average accuracy was achieved with the combination of SVM and the new feature extraction method.

Item EVALUATION OF THE IMPACTS OF CLIMATE CHANGE ON SEDIMENT YIELD FROM THE KATAR WATERSHED, CENTRAL RIFT-VALLEY BASIN, ETHIOPIA (Hawassa University, 2021-12-10) GELILA SAMUEL

Climate change is one of the issues the world, including Ethiopia, faces today, and it is anticipated that climate change will impact sediment yield in watersheds.
The purpose of this study was to investigate the impacts of climate change on sediment yield from the Katar watershed in the eastern Lake Ziway Basin, Ethiopia. It used the Coordinated Regional Climate Downscaling Experiment (CORDEX)-Africa data outputs of the Hadley Global Environment Model 2-Earth System (HadGEM2-ES) under the representative concentration pathway scenario RCP4.5. The analysis was performed for two future projection periods, the 2030s and the 2060s, against a baseline period of 1987-2017. After assessing missing data, quality, and consistency, the bias, coefficient of variation, and correlation were used to evaluate the systematic error in precipitation amount and the degree of precipitation variability, and the data were bias-corrected before serving as input to the impact analysis. A Soil and Water Assessment Tool (SWAT) model was constructed to simulate the hydrological and sedimentological responses to climate change. Model performance in calibration and validation was measured with the coefficient of determination (R²) and Nash-Sutcliffe efficiency (NSE); for sediment yield, R² and NSE were 0.65 and 0.61 in calibration and 0.66 and 0.65 in validation, respectively. The climate change outputs of this research show that the watershed will get warmer in the future: minimum temperature shows an increasing trend of 1.04 °C by the 2030s and 2.04 °C by the 2060s, and maximum temperature of 0.90 °C by the 2030s and 1.56 °C by the 2060s. Average annual rainfall also increases, by 4.8% for the 2030s and 1.6% for the 2060s. The downscaled precipitation and temperature thus increase in both future periods under the RCP4.5 scenario. These climate increments are expected to intensify the mean annual sediment yield by 41.1% and 8.9% for RCP4.5 by the 2030s and the 2060s, respectively.
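The Nash-Sutcliffe efficiency used to judge the SWAT calibration can be sketched directly from its definition (the toy series below do not reproduce the thesis results):

```python
# NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
# NSE = 1 is a perfect fit; NSE <= 0 means the model is no better than
# predicting the observed mean.
def nse(observed, simulated):
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1 - num / den

obs = [10.0, 12.0, 9.0, 14.0, 11.0]   # hypothetical observed sediment flux
sim = [9.5, 12.5, 9.0, 13.0, 11.5]    # hypothetical simulated values
print(round(nse(obs, sim), 3))
```

The thesis's calibration NSE of 0.61 for sediment yield would be computed with exactly this formula over the observed and simulated series.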
The average annual sediment yields were 398 t/km² and 307 t/km² for the 2030s and 2060s, respectively. The results of this study show that the sediment yield of the watershed is likely to increase under climate change scenarios, which will help water resources managers make informed decisions regarding the planning, management, and mitigation of the river basins.

Item Developing Koorete Part of Speech (POS) Tagger: an Empirical Evaluation of Neural Word Embedding and N-Gram Based Statistical Approaches (Hawassa University, 2021-12-12) Agegnehu Ashenafi

The Koorete language is spoken by the Koore people in Amaro Kele Special Woreda and in four kebeles of Burji Special Woreda, in the Southern regional state. Koorete is written with the Latin alphabet (called 'Diizo Beyta' in Koorete). The Latin alphabet was adapted to the language by adding combinations of letters for peculiar sounds, totaling 31 consonants ('Artaxita' in Koorete), 5 vowels ('Arxaxita' in Koorete), and one more symbol. The syntax of the Koorete sentence structure is Subject ('Zeere utaade') + Object ('efaxe') + Verb ('Hanta beyiisaxe'). This study develops a Koorete POS tagger through an empirical evaluation of neural word embedding and N-gram based statistical POS tagging approaches. Part-of-speech (POS) tagging is the process of assigning part-of-speech labels/tags to each word from the Koorete POS tagset. Neural word embeddings are distributed representations of words as vectors, applied here in a Bi-LSTM RNN model; the N-gram based statistical approach uses probability frequencies for sequence labeling of words from the KPT corpus. Words with similar meanings can be represented similarly, which enables deep learning methods; this similarity of representation reduces the out-of-vocabulary impact, i.e., it reduces the |V|-dimensional binary vector to a dense one.
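The N-gram statistical baseline can be sketched, in its simplest form, as a unigram tagger that assigns each word its most frequent tag in the training corpus (a toy illustration; the tags and the tiny corpus below are hypothetical, not drawn from the KPT corpus):

```python
from collections import Counter, defaultdict

# Count tag frequencies per word, then tag each word with its most
# frequent tag, backing off to a default tag for unseen words.
def train_unigram_tagger(tagged_sentences):
    counts = defaultdict(Counter)
    for sentence in tagged_sentences:
        for word, t in sentence:
            counts[word][t] += 1
    return {w: c.most_common(1)[0][0] for w, c in counts.items()}

def tag(sentence, model, default="NOUN"):
    return [(w, model.get(w, default)) for w in sentence]

corpus = [[("zeere", "NOUN"), ("beyiisaxe", "VERB")],
          [("zeere", "NOUN"), ("efaxe", "NOUN")]]
model = train_unigram_tagger(corpus)
print(tag(["zeere", "hanta"], model))
```

A true N-gram tagger conditions on the previous tag(s) as well; the Bi-LSTM alternative replaces these counted frequencies with learned embeddings and recurrent context.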
In simple terms, word embedding is a language modeling technique that maps words to vectors, here using the Word2Vec package, for computation in an RNN. Word2Vec converts words to arrays of real numbers, and the original corpus word categories are concatenated to the generated vectors. Word2Vec can capture the context of a word in a document (semantic and syntactic similarity) in relation to other words. For sequence labeling over these distributed representations, this study uses a Bi-LSTM RNN, which achieves state-of-the-art POS tagging accuracy, and contrasts it with the more classic N-gram based statistical approach. The Bi-LSTM additionally handles letter-case features to preserve the original case information of each word. The study applies the skip-gram algorithm to encode words into a limited vector space, because the skip-gram model is an efficient method for learning high-quality vector representations of words from large amounts of unstructured text. Experiments were conducted on the Bi-LSTM RNN model and the N-gram statistical tagger using the KPT corpus of 1718 sentences (33220 words), divided into 90% training data and 10% testing data. The Bi-LSTM RNN word embedding approach outperformed the N-gram statistical approach, with an accuracy of 98.53%. This study thereby addresses (1) the lack of rich NLP resources for the language, (2) the absence of a KPT corpus and tagsets for Koorete NLP applications, and (3) comparison of state-of-the-art tagging accuracy with POS tagging models for related languages.

Item DATA CENTER VIRTUALIZATION FRAMEWORK IN ETHIOPIAN HIGHER(Hawassa University, 2022-03-04) ABDALLAH HUSEN

Data centers in Ethiopian higher learning institutions (HLI) run physical infrastructure in which a one-application-per-server architecture is exercised.
Such an architecture leads to underutilization and wastage of resources. Virtualization technology lowers the size of the infrastructure, resulting in large savings on energy and other resources, including management time. Cost reduction, energy efficiency, and a smaller carbon footprint are among the significant benefits of virtualization, but the technology is far from plug-and-play: to unlock those benefits effectively, an information technology (IT) expert requires an appropriate model to follow. In most cases, organizations have enough resources to move to virtualization without requiring additional budget. Research has been conducted on virtualization, especially client, application, and network virtualization, but little attention has been given to server virtualization. The objective of this research is to explore current traditional infrastructure practice and to propose a data center virtualization framework using the design science research methodology (DSRM). The goal is to find options that offer a preferred solution for university data centers, increasing service availability and hardware utilization. A virtualization framework is proposed and evaluated using the same services currently run in the university. Experiments were carried out to compare the resource utilization of physical machines and virtual machines (VMs); from the experiments and analysis, a well-matched virtualization framework was developed and tested on three services. Before virtualization, each of these services ran on a separate physical machine. By consolidating these services using virtualization, the study showed that it is possible to provide the same service using only a single server.
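The consolidation decision behind that experiment can be sketched as a back-of-envelope feasibility check; the per-service loads, overhead, and threshold below are hypothetical assumptions, not the thesis's measurements:

```python
# Back-of-envelope consolidation check: can N lightly loaded physical
# servers be merged onto one host without exceeding a safety threshold?
# All numbers here are illustrative, not the study's measured values.

def can_consolidate(service_loads, overhead=0.05, threshold=0.80):
    """service_loads: fraction of one host's CPU each service needs.
    overhead: assumed per-VM hypervisor overhead.
    Returns (fits, projected_load)."""
    projected = sum(load + overhead for load in service_loads)
    return projected <= threshold, projected

# Three services, each idling at 4-6% of a host's CPU, as in the
# one-application-per-server pattern the abstract criticizes.
fits, load = can_consolidate([0.04, 0.05, 0.06])
```

The same arithmetic explains the reported outcome: several servers running in single-digit utilization consolidate comfortably onto one host whose utilization then rises but stays well under capacity.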
The resource utilization of that single server increased as follows: central processing unit (CPU) usage rose from 4% to 17%, physical memory usage rose from 15% to 62% of 16 GB, and hard disk space in use rose from 32% to 67.4%. University data centers can use the proposed well-matched framework to increase their levels of server utilization.

Item CONTEXT-BASED SPELL CHECKER FOR SIDAAMU AFOO(Hawassa University, 2022-03-04) MEKONEN MOKE WAGARA

A spell checker is an application of natural language processing used to detect and correct spelling errors in written text. Spelling errors can be non-word errors or real-word errors. A non-word error is a misspelled word that does not exist in the language and has no meaning, whereas a real-word error is a valid word of the language that does not fit contextually in the sentence. We designed and implemented a spell checker for Sidaamu Afoo that can detect and correct both non-word and real-word errors. Sidaamu Afoo is spoken in the Sidaama region in the south-central part of Ethiopia; it is an official working language and is used as a medium of instruction in primary schools of the Sidaama national regional state. To address spelling errors in Sidaamu Afoo text, a spell checker is required. In this study, a dictionary look-up approach with a hashing algorithm is used to detect non-word errors, and a character-based encoder-decoder model is used to correct them. An LSTM model with an attention mechanism and edit distance is used to detect and correct context-based spelling errors. For the experiment, 55,440 sentences were used, of which 90% (49,896) were for training and 10% (5,544) for testing.
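The non-word detection pipeline described above, hashed dictionary look-up plus edit distance, can be sketched as follows; the toy lexicon is hypothetical, and edit-distance ranking here stands in for the study's character-based encoder-decoder corrector:

```python
# Sketch of non-word spell checking: a hashed dictionary (a Python set)
# flags words absent from the lexicon, and Levenshtein edit distance
# ranks replacement candidates. The lexicon below is illustrative only.

def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

lexicon = {"beyta", "afoo", "hanta", "utaade"}  # hashed dictionary (set)

def check(word, max_dist=2):
    """Return None if the word is valid, else ranked correction candidates."""
    if word in lexicon:          # O(1) average-case hash look-up
        return None
    candidates = [(edit_distance(word, w), w) for w in lexicon]
    return [w for d, w in sorted(candidates) if d <= max_dist]

assert check("beyta") is None    # in-lexicon word passes
suggestions = check("beyte")     # one substitution away from "beyta"
```

Context-sensitive (real-word) errors need the sentence around the word, which is why the study pairs this dictionary stage with an LSTM language model rather than relying on look-up alone.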
According to the experimental results, for the isolated spell checker, dictionary look-up with hashing achieved a detection accuracy of 93.05%, a recall of correct words of 91.51%, and a precision on incorrect words of 72.37%; the encoder-decoder model achieved a correction recall of 91.76%. For the context-sensitive spell checker, the LSTM model with attention and edit distance achieved a detection accuracy of 88.8%, a recall of correct words of 86.18%, and a precision on incorrect words of 62.84%, with a correction recall of 74.28%. The experiments show that the models used to detect and correct both non-word and real-word spelling errors in Sidaamu Afoo text performed well. Finally, to improve performance, we recommend using additional datasets and a state-of-the-art transformer model.

Item TRAINEE PERFORMANCE PREDICTION MODEL FOR HAWASSA POLYTECHNIC COLLEGE USING KDP(Hawassa University, 2022-03-05) TIZITA G/SILASSIE

Data mining is a key tool for discovering knowledge from large datasets. In many educational organizations worldwide, this technology helps institutions understand their data explicitly and paves the way to producing quality graduates, yet its power remains far less exploited in the education sector than in others. Although there are studies on the academic performance of students using data mining techniques, they all concern university students; no research on trainee academic performance exists for technical and vocational institutes. The purpose of this study is therefore to develop a trainee performance prediction model for Hawassa Polytechnic College. A total of 8,200 records with 13 attributes were collected from the college registrar's dataset for the past 5 years, 2009 to 2013 E.C. An experiment was conducted following the Knowledge Discovery Process (KDP) model using WEKA software version 3.8.4.
Four data mining algorithms, namely J48 decision trees, JRip rule induction, Naive Bayes, and PART, were used in seven experiments (J48 pruned and unpruned, Naive Bayes, JRip pruned and unpruned, and PART pruned and unpruned) to develop the trainee performance prediction model. All experiments were carried out on the same dataset and evaluated with 10-fold cross-validation and with 80% and 66% split test parameters. The study shows that PART unpruned with 10-fold cross-validation has the highest accuracy, 95.4268%, and that attributes such as trade/occupation, EGSECE, transcript, level, sex, English, and sector show strong predictive power and can support decision making about trainee performance. Finally, the researcher developed a prototype based on the rules generated from the selected algorithm.

Item IMAGE BASED BARLEY LEAF FUNGAL DISEASE DETECTION USING CONVOLUTIONAL NEURAL NETWORK(Hawassa University, 2022-03-06) MAMO GUDISA

Barley is one of the most widely grown crops in the East Arsi, West Arsi, and Bale zones of the Oromia Region, and a main source of food and income in these and other areas of Ethiopia. However, the barley crop is affected by fungal diseases that reduce production and are a main cause of economic losses in Ethiopia's agricultural industries. For the betterment of human health, fungal diseases of barley leaves must be controlled and effectively monitored. Earlier researchers used hand-crafted features for image classification and recognition with machine learning approaches; nowadays, developments in deep learning allow researchers to drastically improve the accuracy of object detection and classification.
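The 10-fold cross-validation protocol used to evaluate the classifiers in the trainee-prediction study above can be sketched in plain Python; the majority-label "classifier" and the pass/fail labels are stand-ins, not the WEKA algorithms or the actual registrar data:

```python
# Plain-Python sketch of k-fold cross-validation. The "classifier" here
# simply predicts the majority label of its training fold, as a stand-in
# for J48, JRip, Naive Bayes, or PART.

def k_fold_indices(n, k):
    """Yield (train_idx, test_idx) pairs for k contiguous folds."""
    fold = n // k
    for i in range(k):
        test = list(range(i * fold, (i + 1) * fold if i < k - 1 else n))
        test_set = set(test)
        train = [j for j in range(n) if j not in test_set]
        yield train, test

def majority_label(labels):
    return max(set(labels), key=labels.count)

def cross_validate(labels, k=10):
    """Average held-out accuracy of the majority-label baseline."""
    scores = []
    for train, test in k_fold_indices(len(labels), k):
        predicted = majority_label([labels[j] for j in train])
        correct = sum(labels[j] == predicted for j in test)
        scores.append(correct / len(test))
    return sum(scores) / len(scores)

# Hypothetical pass/fail outcomes for 20 trainees.
labels = ["pass"] * 14 + ["fail"] * 6
accuracy = cross_validate(labels, k=10)
```

Each record is held out exactly once, so the averaged score estimates performance on unseen trainees, which is why the study reports 10-fold figures alongside simple percentage splits.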
In this thesis, the researcher used a deep learning approach, a convolutional neural network, to detect fungal disease of the barley crop using leaf images collected from the Kulumsa research center in the Arsi zone and images captured directly from different barley farms. The dataset contains two categories, leaf rust and normal, with 10,224 healthy and diseased images in total; 80% of the images were used for training and the rest for testing the model. During training, data augmentation was used to generate more images to fit the proposed model, a practice many researchers agree can also increase model performance. The designed model was trained and tested on the collected dataset and compared with two pre-trained convolutional neural network models, MobileNet and InceptionV3. The model obtained 99.53% accuracy and can serve as a practical tool for farmers to protect the barley crop against fungal diseases.

Item DEEP LEARNING BASED FABA BEANS LEAF DISEASES DETECTION AND CLASSIFICATION(Hawassa University, 2022-03-08) MARTHA MEZGEBU HAILU

Faba bean (Vicia faba L.) is believed to have originated in the Near East and has since spread throughout the world. It is one of the most widely cultivated legumes in the world, after chickpea and pea. Ethiopia is the world's second-leading producer of faba beans after China, accounting for 6.96% of world production and 40.5% of Africa's. Faba bean is grown primarily for its edible seeds, which are used for human consumption; it also contributes to human health and sustains the productivity of the farming system through nitrogen fixation. However, it is often affected by diseases caused by fungi, viruses, and bacteria that reduce the quality and quantity of faba bean production.
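The data augmentation step used in the barley study above can be sketched with simple geometric transforms; the 2x2 "images" are toy pixel grids, not real leaf photographs, and real pipelines typically also vary brightness, crop, and zoom:

```python
# Minimal geometric data augmentation: horizontal flips and 90-degree
# rotations expand a small labeled image set. Images are toy nested
# lists of pixel values, purely illustrative.

def hflip(img):
    """Mirror each row left-to-right."""
    return [row[::-1] for row in img]

def rot90(img):
    """Rotate 90 degrees clockwise: reversed rows become columns."""
    return [list(row) for row in zip(*img[::-1])]

def augment(dataset):
    """dataset: list of (image, label). Returns originals plus variants,
    with each variant keeping its source image's label."""
    out = []
    for img, label in dataset:
        out.append((img, label))
        out.append((hflip(img), label))
        out.append((rot90(img), label))
    return out

img = [[1, 2],
       [3, 4]]
augmented = augment([(img, "leaf_rust")])
```

Because a flipped or rotated leaf still shows the same disease, the label is copied unchanged, which is what lets augmentation multiply the training set without new field collection.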
Faba bean diseases usually appear on the leaf, flower, pods, seed, and stem step by step and render the crop unusable. The leaf is affected more than other parts, with attacks both inside and outside the leaves, and it plays an essential role during the growing period: without leaves there are no flowers, without flowers no pods, and without pods no seeds. Traditionally, farmers and experts detect and identify plant diseases by eye, a method that is inaccurate and expensive because there are numerous diseases; detection using image processing techniques is more accurate and faster. We therefore developed an automatic deep learning based faba bean leaf disease detection and classification model, designing its architecture with a convolutional neural network, which has become an accurate and precise method for detecting and classifying plant diseases. The study was conducted in a faba bean plantation area in the Oromia region, Arsi zone, D/Xijo Woreda, on farmers' land, with particular reference to Bucho Silase kebele, Ethiopia, where the dataset was collected. Leaves of healthy and infected crops were collected and labeled; the images were enhanced with pixel-wise operations, followed by feature extraction and classification of the captured leaf patterns to identify faba bean leaf diseases. Four classifier labels were used: ascochyta blight, chocolate spot (botrytis), rust, and healthy leaf. The extracted features were fed into the neural network, with the dataset split into training, validation, and testing sets of 80%, 10%, and 10%, respectively, using a batch size of 32 and the Adam optimizer. The faba bean leaf disease detection and classification model achieved an overall accuracy of 99.58%.
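The core operation of the convolutional networks in the two leaf-disease studies above, a convolution followed by a ReLU activation, can be written out for a toy grayscale patch; the 4x4 input and 3x3 edge-detecting kernel are illustrative, whereas trained models learn their kernel values from the labeled leaf images:

```python
# A single convolution + ReLU step on nested lists, the building block
# of CNN image classifiers. Input and kernel values are toy examples.

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation (the "convolution" of CNNs)."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            acc = sum(image[i + a][j + b] * kernel[a][b]
                      for a in range(kh) for b in range(kw))
            row.append(acc)
        out.append(row)
    return out

def relu(feature_map):
    """Zero out negative responses, keeping only positive activations."""
    return [[max(0, v) for v in row] for row in feature_map]

image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]        # vertical edge down the middle
kernel = [[-1, 0, 1],
          [-1, 0, 1],
          [-1, 0, 1]]          # responds to a left-to-right intensity rise

feature_map = relu(conv2d(image, kernel))
```

Stacking many such learned filters, with pooling and dense layers on top, is what lets the barley and faba bean models map raw leaf pixels to disease classes.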
