Articles, Conference and Workshop Papers Collection

Permanent URI for this collection: http://10.10.97.169:4000/handle/123456789/4223


Recent Submissions

  • Item
    Robust edge detection method for the segmentation of diabetic foot ulcer images
    (2020-08) Mwawado, Rehema
    Segmentation is an open-ended research problem in various computer vision and image processing tasks. This pre-processing operation requires a robust edge detector to generate appealing results. However, the available approaches for edge detection underperform when applied to images corrupted by noise or impacted by poor imaging conditions. The problem becomes significant for images containing diabetic foot ulcers, which originate from people with varied skin color. Comparative performance evaluation of edge detectors facilitates the process of deciding an appropriate method for image segmentation of diabetic foot ulcers. Our research discovered that the classical edge detectors cannot clearly locate ulcers in images of black-skin feet. In addition, these methods collapse for degraded input images. Therefore, the current research proposes a robust edge detector that addresses some limitations of the previous attempts. The proposed method incorporates a hybrid diffusion-steered functional derived from the total variation and the Perona-Malik diffusivities, which have been reported to capture semantic features in images effectively. The empirical results show that our method generates clearer and stronger edge maps with higher perceptual and objective qualities. More importantly, the proposed method offers lower computational times, an advantage that gives more insights into the possible application of the method in time-sensitive tasks.
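    The exact hybrid functional is not given in the abstract; as a rough, hypothetical illustration only, the NumPy sketch below runs an explicit diffusion iteration whose diffusivity blends a total-variation-like term with the Perona-Malik term. The blending weight alpha, edge parameter k, regularizer eps, step size, and the clipping are assumptions, not the authors' formulation.

      import numpy as np

      def hybrid_diffusivity(grad_mag, k=0.05, alpha=0.5, eps=1e-2):
          # Blend of a TV-like term (1/|grad u|) and the Perona-Malik term;
          # clipped to keep the explicit scheme below numerically stable.
          tv = 1.0 / (grad_mag + eps)
          pm = 1.0 / (1.0 + (grad_mag / k) ** 2)
          return np.clip(alpha * tv + (1.0 - alpha) * pm, 0.0, 1.0)

      def diffuse(image, n_iter=20, dt=0.05):
          # Explicit diffusion-steered smoothing of a grayscale image scaled to [0, 1].
          u = image.astype(np.float64).copy()
          for _ in range(n_iter):
              g0, g1 = np.gradient(u)
              c = hybrid_diffusivity(np.hypot(g0, g1))
              # Divergence of c * grad(u), one forward Euler step.
              u += dt * (np.gradient(c * g0, axis=0) + np.gradient(c * g1, axis=1))
          return u

    Edges would then be extracted from the smoothed image with any standard gradient-based detector.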
  • Item
    Likelihood of adopting briquette technology in abundance of competitive energy sources: a case study of Morogoro urban and rural districts, Tanzania
    (Research Gate, 2022) Yustas, Y. M.; Tarimo, W. M.; Mbacho, S. A.; Kiobia, D. O.; Makange, N. R.; Kashaija, A. T.; Silungwe, F. R.
    Firewood and charcoal are the primary energy resources in many developing countries, especially in sub-Saharan Africa. However, the unsustainable collection and use of these resources negatively impact the environment. Equally, using briquettes as a green energy resource can address the energy shortage and conserve the environment. However, information on people's preference for briquettes over other alternative energy sources is scarce. Furthermore, studies demonstrating briquette technology preferences and adoption among prospective users, including youth and women in urban and rural areas, are limited. Therefore, this study was conducted in the Morogoro district to (1) characterise the respondents' demographic attributes useful for evaluating people's preferences, (2) assess the preference for briquette fuels, particularly among youth and women, and (3) evaluate the extent of briquette use as a source of energy compared to other alternative energy sources. The household survey involved 330 respondents in urban, peri-urban, and rural areas of Morogoro. The areas were chosen to represent typical Tanzanian settings. In addition, supplementary key informant interviews involved village leaders, charcoal retailers, and other people with knowledge of briquette technology. The results show that over 95% of respondents preferred to use briquettes as an alternative energy source and expressed their willingness to engage in the briquette business. Additionally, the study shows low use of briquettes compared to other energy sources like charcoal and firewood in urban, peri-urban, and rural areas. Furthermore, there was no significant difference between men and women in their willingness to join the briquette business (p-value = 0.517). Overall, only a few people are aware of briquette technology. This study recommends increasing awareness of briquette technology by training youths and women in briquette technology and promoting the availability of briquette products and stoves. In addition, assessing the factors that hinder briquettes from being fully preferred by people remains a point of research interest.
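    The abstract does not name the test behind the reported p-value; as a hedged illustration only, a chi-square test of independence between gender and willingness to join the briquette business could be run on a 2x2 count table like the one below. The counts are hypothetical, not data from the study.

      from scipy.stats import chi2_contingency

      # Hypothetical counts: rows = men, women; columns = willing, not willing.
      table = [[150, 12],
               [155, 13]]
      chi2, p_value, dof, expected = chi2_contingency(table)
      print(f"chi2 = {chi2:.3f}, p = {p_value:.3f}")  # p > 0.05 -> no significant gender difference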
  • Item
    An extensive review of mobile agricultural robotics for field operations: focus on cotton harvesting
    (MDPI, 2020-03-04) Fue, Kadeghe G.; Barnes, Edward M.; Rains, Glen C.; Porter, Wesley M.
    In this review, we examine opportunities and challenges for 21st-century robotic agricultural cotton harvesting research and commercial development. The paper reviews opportunities present in the agricultural robotics industry, and a detailed analysis is conducted for the cotton harvesting robot industry. The review is divided into four sections: (1) general agricultural robotic operations, where we survey current robotic technologies in agriculture; (2) opportunities and advances in related robotic harvesting fields, which focuses on robotic harvesting technologies; (3) status and progress in cotton harvesting robot research, which concentrates on current research and technology development in cotton harvesting robots; and (4) challenges in commercial deployment of agricultural robots, where challenges to commercializing and using these robots are reviewed. Conclusions are drawn about cotton harvesting robot research and the potential of multipurpose robotic operations in general. The development of multipurpose robots that can perform multiple operations on different crops to increase their value is discussed. In each section except the conclusion, the analysis is divided into four robotic system categories: mobility and steering, sensing and localization, path planning, and robotic manipulation.
  • Item
    Evaluation of a stereo vision system for cotton row detection and boll location estimation in direct sunlight
    (MDPI, 2020-08-05) Fue, Kadeghe; Porter, Wesley; Barnes, Edward; Li, Changying; Rains, Glen
    Cotton harvesting is performed using expensive combine harvesters, which makes it difficult for small to medium-size cotton farmers to grow cotton economically. Advances in robotics have provided an opportunity to harvest cotton using small and robust autonomous rovers that can be deployed in the field as a “swarm” of harvesters, with each harvester responsible for a small hectarage. However, rovers need high-performance navigation to obtain the necessary precision for harvesting. Current precision harvesting systems depend heavily on Real-Time Kinematic Global Navigation Satellite System (RTK-GNSS) to navigate rows of crops. However, GNSS cannot be the only navigation method because, for robots to work as a coordinated multiagent unit on the same farm, they also require visual systems to navigate, avoid collisions, and accommodate plant growth and canopy changes. Hence, the optical system remains a complementary method for increasing the efficiency of the GNSS. In this study, visual detection of cotton rows and bolls was developed, demonstrated, and evaluated. A pixel-based algorithm was used to determine the upper and lower parts of the canopy of the cotton rows by assuming a normal distribution of the high and low depth pixels. The left and right rows were detected using perspective transformation and pixel-based sliding window algorithms. Then, the system determined the Bayesian score of the detection and calculated the center of the rows for smooth navigation of the rover. This visual system achieved an accuracy of 92.3% and an F1 score of 0.951 for the detection of cotton rows. Furthermore, the same stereo vision system was used to detect the location of the cotton bolls. A comparison of the cotton bolls’ distances above the ground to manual measurements showed that the system achieved an average R2 value of 99% with a root mean square error (RMSE) of 9 mm when stationary and 95% with an RMSE of 34 mm when moving at approximately 0.64 km/h. The rover might need to stop several times or move more slowly to improve its detection accuracy. Therefore, the accuracy obtained in row detection and boll location estimation is favorable for use in a cotton harvesting robotic system. Future research should involve testing the models in a large farm with undefoliated plants.
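    As a rough sketch of the pixel-based sliding-window idea described above (not the authors' exact algorithm; the window count, margin, minimum pixel threshold, and the assumption of a top-down binary plant mask are illustrative choices), the left and right row centers can be tracked window by window:

      import numpy as np

      def sliding_window_row_centers(mask, n_windows=9, margin=40, minpix=50):
          # Track the left and right crop-row centers in a top-down binary mask (1 = plant pixel).
          h, w = mask.shape
          hist = mask[h // 2:, :].sum(axis=0)                 # column histogram of plant pixels
          left_x = int(np.argmax(hist[: w // 2]))
          right_x = int(w // 2 + np.argmax(hist[w // 2:]))
          win_h = h // n_windows
          lefts, rights = [], []
          for i in range(n_windows):
              y_lo, y_hi = h - (i + 1) * win_h, h - i * win_h
              for x, out in ((left_x, lefts), (right_x, rights)):
                  x_lo = max(x - margin, 0)
                  ys, xs = np.nonzero(mask[y_lo:y_hi, x_lo:x + margin])
                  if len(xs) > minpix:
                      x = int(xs.mean()) + x_lo               # re-center the window on the row
                  out.append(x)
              left_x, right_x = lefts[-1], rights[-1]
          return lefts, rights                                # per-window centers, bottom to top

    The midpoint of the left and right centers in each window would give the row center line used for navigation.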
  • Item
    Autonomous navigation of a center-articulated and Hydrostatic transmission rover using a modified Pure pursuit algorithm in a cotton field
    (MDPI, 2020-08-07) Fue, Kadeghe; Porter, Wesley; Barnes, Edward; Li, Changying; Rains, Glen
    This study proposes an algorithm that controls an autonomous, multi-purpose, center-articulated hydrostatic transmission rover to navigate along crop rows. This multi-purpose rover (MPR) is being developed to harvest undefoliated cotton to expand the harvest window to up to 50 days. The rover would harvest cotton in teams by performing several passes as the bolls become ready to harvest. We propose that a small robot could make cotton production more profitable for farmers and more accessible to owners of smaller plots of land who cannot afford large tractors and harvesting equipment. The rover was localized with a low-cost Real-Time Kinematic Global Navigation Satellite System (RTK-GNSS), encoders, and an Inertial Measurement Unit (IMU) for heading. Robot Operating System (ROS)-based software was developed to harness the sensor information, localize the rover, and execute path-following controls. To test the localization and modified pure-pursuit path-following controls, GNSS waypoints were first obtained by manually steering the rover over the rows, followed by the rover autonomously driving over the rows. The results showed that the robot achieved a mean absolute error (MAE) of 0.04 m, 0.06 m, and 0.09 m for the first, second, and third passes of the experiment, respectively, with an overall MAE of 0.06 m. When turning at the end of the row, the MAE from the RTK-GNSS-generated path was 0.24 m. The turning errors were acceptable for the open field at the end of the row. Errors while driving down the row occasionally brought the rover close to the plants’ stems but likely would not impede operations designed for the MPR. Therefore, the designed rover and control algorithms are suitable for cotton harvesting operations.
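    The modification to pure pursuit is not detailed in the abstract. For reference, a generic pure-pursuit step, with an assumed mapping from path curvature to a center-articulation angle and an assumed wheelbase, could look like this:

      import math

      def pure_pursuit_curvature(pose, lookahead_pt):
          # Curvature that steers a vehicle at pose = (x, y, heading) toward a lookahead point.
          x, y, heading = pose
          dx, dy = lookahead_pt[0] - x, lookahead_pt[1] - y
          # Lateral offset of the lookahead point in the vehicle frame.
          y_veh = -math.sin(heading) * dx + math.cos(heading) * dy
          ld2 = dx * dx + dy * dy
          return 2.0 * y_veh / ld2 if ld2 > 1e-6 else 0.0

      def articulation_angle(curvature, wheelbase=1.2):
          # Simplified curvature-to-articulation-angle mapping; the wheelbase value and
          # this mapping are assumptions, not the paper's vehicle model.
          return math.atan(curvature * wheelbase)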
  • Item
    Center-articulated hydrostatic cotton harvesting Rover using visual-servoing control and a finite state machine
    (MDPI, 2020-07-30) Fue, Kadeghe; Porter, Wesley; Barnes, Edward; Li, Changying; Rains, Glen
    Multiple small rovers can repeatedly pick cotton as bolls begin to open until the end of the season. Several of these rovers can move between rows of cotton and, when bolls are detected, use a manipulator to pick the bolls. To develop such a multi-agent cotton-harvesting system, each cotton-harvesting rover would need to accomplish three motions: the rover must move forward/backward, turn left/right, and the robotic manipulator must move to harvest cotton bolls. Controlling these actions can involve several complex states and transitions. However, using the robot operating system (ROS)-independent finite state machine package (SMACH), adaptive and optimal control can be achieved. SMACH provides task-level capability for deploying multiple tasks to the rover and manipulator. In this study, a center-articulated hydrostatic cotton-harvesting rover, using a stereo camera to locate the end-effector and cotton bolls, was developed. The robot harvested the bolls by using a 2D manipulator that moves linearly, horizontally and vertically, perpendicular to the direction of the rover’s movement. We demonstrate preliminary results in an environment simulating direct sunlight, as well as in an actual cotton field. This study contributes to cotton engineering by presenting a robotic system that operates in the real field. The designed robot demonstrates that it is possible to use a Cartesian manipulator for the robotic harvesting of cotton; however, to reach commercial viability, the speed of harvest and successful removal of bolls (Action Success Ratio (ASR)) must be improved.
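    A minimal sketch of how the ROS-independent SMACH package chains such behaviours; the state names, outcomes, and transitions here are illustrative, not the paper's actual state machine:

      import smach

      class Navigate(smach.State):
          def __init__(self):
              smach.State.__init__(self, outcomes=['boll_found', 'row_done'])
          def execute(self, userdata):
              # Drive down the row until the vision system reports a harvestable boll.
              return 'boll_found'

      class Pick(smach.State):
          def __init__(self):
              smach.State.__init__(self, outcomes=['picked'])
          def execute(self, userdata):
              # Visual-servo the Cartesian manipulator to the boll and actuate the end-effector.
              return 'picked'

      sm = smach.StateMachine(outcomes=['finished'])
      with sm:
          smach.StateMachine.add('NAVIGATE', Navigate(),
                                 transitions={'boll_found': 'PICK', 'row_done': 'finished'})
          smach.StateMachine.add('PICK', Pick(), transitions={'picked': 'NAVIGATE'})
      sm.execute()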
  • Item
    Visual row detection using pixel-based algorithm and stereo camera for cotton-picking robot
    (2019 ASABE Annual International Meeting, Boston, Massachusetts, 2019) Fue, K. G.; Porter, W. M.; Barnes, E. M.; Rains, G. C.
    Precision farming still depends heavily on RTK-GPS to navigate the rows of crops. However, GPS cannot be the only method used to navigate the farm: for robots to work as a “swarm” on the same farm, they also require visual systems to navigate, avoid collisions, and accommodate plant growth and canopy changes. Hence, the visual system remains a complementary method that adds to the efficiency of the GPS system. In this study, optical detection of cotton rows is investigated and demonstrated. A stereo camera is used to detect the row depth, and then a pixel-based algorithm is used to determine the upper and lower parts of the canopy of the cotton rows by assuming a normal distribution of the high and low pixels. The left and right rows are detected using perspective transform and pixel-based sliding window algorithms. Then, the system determines the Bayesian score of the detection and calculates the center of the rows for smooth navigation of the cotton-picking robot. The 92.3% accuracy and F1 score of 0.951 are sufficient to deploy the algorithm for robotic operations. The deployment and testing of the robot navigation will be done in 2019.
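    The perspective (bird's-eye) transform mentioned above can be sketched with OpenCV as follows; the four source corner points are placeholders that would come from camera calibration, not values from the paper.

      import cv2
      import numpy as np

      def birds_eye(image, src_pts, out_size=(400, 600)):
          # Warp a forward-looking camera frame to a top-down view for row detection.
          w, h = out_size
          dst_pts = np.float32([[0, h], [w, h], [w, 0], [0, 0]])
          M = cv2.getPerspectiveTransform(np.float32(src_pts), dst_pts)
          return cv2.warpPerspective(image, M, (w, h))

      # Hypothetical source corners (bottom-left, bottom-right, top-right, top-left) in pixels.
      src = [(80, 700), (1200, 700), (820, 380), (460, 380)]
      # top_down = birds_eye(frame, src)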
  • Item
    Visual inverse kinematics for cotton picking robot
    (2019 Beltwide Cotton Conferences, New Orleans, Louisiana, 2019) Fue, K. G.; Porter, W. M.; Barnes, E. M.; Rains, G. C.
    Fast cotton picking requires a fast-moving arm. The Cartesian arm remains the simplest and quickest-moving arm compared to other configurations. In this study, a 2D Cartesian arm controlled with a stepper drive is investigated. The arm is designed and mounted on a research rover. Two stereo cameras are installed and used to take images of the cotton plants from two different angles: one camera points directly downward while the other points perpendicular to the row. This configuration allows the robot to view the cotton plants and bolls. The robot arm can move upward and downward or left and right. The rover uses two linear servos, connected to a variable displacement pump swashplate powering four hydraulic wheel motors and to the engine accelerator linkage, to move forward. The forward and backward movement of the rover makes the cotton-picking arm movement three-dimensional. The downward camera gives feedback to the robotic system on the position of the arm. The rover moves forward along the row and stops whenever a cotton boll is perpendicular to the Cartesian arm. The sideways camera gives an alternative view of the cotton boll that allows the robot servos to stop accurately. The arm uses vacuum suction to pick the cotton bolls; the vacuum suction end-effector is mounted on the arm and points perpendicular to the row. In this paper, the kinematics and movement of the cotton arm and boll picking are demonstrated.
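    For a 2D Cartesian arm, the visual inverse kinematics reduces to converting the boll's position in the camera frame into stepper counts along the two axes. The sketch below assumes a calibrated camera-to-arm offset and a steps-per-millimetre constant, neither of which is given in the abstract.

      STEPS_PER_MM = 40.0                      # assumed stepper-drive resolution
      CAM_TO_ARM_OFFSET = (120.0, -35.0)       # assumed camera-to-arm origin offset, mm

      def boll_to_steps(boll_y_mm, boll_z_mm):
          # Map a boll's lateral (y) and vertical (z) position in the camera frame
          # to target step counts for the horizontal and vertical axes.
          y_arm = boll_y_mm + CAM_TO_ARM_OFFSET[0]
          z_arm = boll_z_mm + CAM_TO_ARM_OFFSET[1]
          return round(y_arm * STEPS_PER_MM), round(z_arm * STEPS_PER_MM)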
  • Item
    Real-time 3-D measurement of cotton boll positions using machine vision under field conditions
    (2018 Beltwide Cotton Conferences, San Antonio, 2018-01) Fue, K. G.; Rains, G. C.; Porter, W. M.
    Cotton harvesting is performed by expensive combine harvesters that hinder small to medium-size cotton farmers from growing cotton economically. Advances in robotics provide an opportunity to harvest cotton using small and robust autonomous rovers that can be deployed in the field as an “army” of harvesters. This paradigm shift in cotton harvesting requires high-accuracy 3D measurement of the cotton boll position under field conditions. This in-field high-throughput phenotyping of cotton boll position includes real-time image acquisition, depth processing, color segmentation, feature extraction, and determination of cotton boll position. In this study, a 3D camera system was mounted on a research rover at 82° below the horizontal and took 720p images at a rate of 15 frames per second while the rover was moving over two rows of potted defoliated cotton plants. The software development kit provided by the camera manufacturer was installed and used to process and provide a disparity map of cotton bolls. The system was installed with the Robot Operating System (ROS) to provide live image frames to client computers wirelessly and in real time. Cotton boll distances from the ground were determined using a 4-step machine vision algorithm (depth processing, color segmentation, feature extraction, and frame matching for position determination). The 3D camera provided the distance of the boll from the left lens, and algorithms were developed to provide the vertical distance from the ground and the horizontal distance from the rover. Comparing the cotton boll distance above the ground with manual measurements, the system achieved an average R2 value of 99% with 9 mm RMSE when stationary and 95% with 34 mm RMSE when moving at approximately 0.64 km/h. This level of accuracy is favourable for proceeding to the next step of simultaneous localization and mapping of cotton bolls and robotic harvesting.
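    One way to recover a boll's height above the ground from the tilted stereo camera is to rotate the camera-frame point by the 82° tilt. This is a hedged sketch: the camera mounting height and the OpenCV-style axis convention are assumptions, not values from the paper.

      import math

      CAMERA_TILT_DEG = 82.0      # camera pointed 82 degrees below the horizontal (from the abstract)
      CAMERA_HEIGHT_M = 1.5       # assumed mounting height of the camera above the ground

      def boll_height_above_ground(z_depth_m, y_down_m):
          # Camera-frame point: z along the optical axis, y downward in the image plane.
          tilt = math.radians(CAMERA_TILT_DEG)
          # Vertical drop from the camera to the boll in the world frame.
          drop = z_depth_m * math.sin(tilt) + y_down_m * math.cos(tilt)
          return CAMERA_HEIGHT_M - drop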
  • Item
    Visual control of cotton-picking Rover and manipulator using a ROS-independent finite state machine
    (2019 ASABE Annual International Meeting, 2019-07) Fue, Kadeghe; Barnes, Edward; Porter, Wesley; Rains, Glen
    Small rovers are being developed to pick cotton as bolls open. The concept is to have several of these rovers move between rows of cotton and, when bolls are detected, use a manipulator to pick the bolls. To accomplish this goal, each cotton-picking robot needs to accomplish three movements: the rover must move forward/backward and turn left/right, and the manipulator must move to harvest the detected cotton bolls. Control of these actions can have several states and transitions. Transitions from one state to another can be complex, but using the ROS-independent finite state machine package (SMACH), adaptive and optimal control can be achieved. SMACH provides task-level capability to deploy multiple tasks to the rover and manipulator. In this research, a cotton-picking robot using a stereo camera to locate the end-effector and cotton bolls is developed. The robot harvests the bolls using a 2D manipulator that moves linearly, horizontally and vertically. The boll’s 3D position is determined from the stereo camera parameters, and the decisions of the finite state machine guide the manipulator and the rover to the destination. PID control is deployed to control rover movement to the boll. We demonstrate preliminary results in a simulated direct-sunlight environment. The system achieved a picking performance of 17.3 seconds per boll. It completed the task while navigating at a speed of 0.87 cm per second and collecting 0.06 bolls per second. In each mission, the system was able to detect all the bolls but one.
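    The abstract mentions PID control of rover motion toward the boll; a generic discrete PID loop looks like the sketch below. The gains are placeholders, not the tuned values used on the rover.

      class PID:
          # Minimal discrete PID controller for the rover's approach to a detected boll.
          def __init__(self, kp=1.0, ki=0.0, kd=0.1):
              self.kp, self.ki, self.kd = kp, ki, kd
              self.integral = 0.0
              self.prev_error = None

          def update(self, error, dt):
              self.integral += error * dt
              derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
              self.prev_error = error
              return self.kp * error + self.ki * self.integral + self.kd * derivative

      # Example: drive the longitudinal offset between end-effector and boll to zero.
      # pid = PID(); command = pid.update(error=boll_x_offset, dt=0.05)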
  • Item
    Evaluating the effect of planter downforce and seed vigor on crop emergence and yield in Hill-drop vs Singulated Cotton.
    (2018 Beltwide Cotton Conferences, San Antonio, 2018-01) Virk, S. S.; Porter, W. M.; Fue, K. G.; Snider, J. L.; Whitaker, J.
    Selection of correct planting parameters and their optimization based on current field conditions is crucial to achieving high crop emergence, which can translate to higher yields. A study was conducted during 2017 to evaluate the effect of planter downforce and seed vigor on crop emergence and yield in two cotton varieties planted with singulated and hill-drop seed plates. For this study, two cotton varieties (a small-seeded, low-vigor variety and a large-seeded, high-vigor variety) were planted at a 1-inch seeding depth using two different planters to obtain singulated and hill-drop planting conditions. Two seeding rates of 29,000 seeds/ac and 42,500 seeds/ac were used to represent typical low and high populations for planting cotton in Georgia. Planter downforce treatments consisting of low, medium, and high downforce values (100, 200, and 300 lbs., respectively) were implemented using the available downforce technology on both planters. Field data collection consisted of emergence counts at one and three weeks after planting and yield data from the center two rows of a four-row plot at the end of the season. Data analysis indicated that singulated seeds were more effective in low downforce treatments, independent of the crop variety. Hill-drop seeds exhibited better crop emergence (75-81%) in higher downforce treatments compared to the crop emergence (62-72%) obtained with singulated seeds. Yield data also suggested that singulated cotton can maximize emergence in low to medium downforce conditions for large-seeded, high-vigor varieties, whereas hill-drop cotton yields better with small-seeded, low-vigor varieties planted at medium to high downforce. Results showed that low-vigor varieties require higher seeding rates (more seeds per foot) when planted using low downforce to provide an overall high crop emergence rate, whereas this trend was not observed in the high-vigor variety. A comparison among seeding rates showed that higher seeding rates did not maximize crop emergence when planted as hill-drop. Overall, the results from this study emphasize the importance of using correct planting parameters (downforce, seeding rate, and variety) based on existing field conditions to maximize crop emergence and yield.
  • Item
    Field testing of the autonomous cotton harvesting Rover in an undefoliated cotton field
    (2020 Beltwide Cotton Conferences, Austin, Texas, 2020) Fue, K. G.; Porter, W. M.; Barnes, E. M.; Rains, G. C.
    This study proposes the use of an autonomous rover to harvest cotton bolls before defoliation and as the bolls open. This would expand the harvest window to up to 50 days and make cotton production more profitable for farmers by picking cotton before its quality is at risk. We developed a cotton harvesting rover that is a center-articulated vehicle with an x-y picking manipulator and a combination vacuum and rotating-tines end-effector to pull bolls off the plant. The rover uses a stereo camera to see rows, RTK-GPS to localize itself, a fisheye camera for heading, and a stereo camera to locate the cotton bolls. The SMACH library, a ROS-independent task-level architecture, is used to build state machines for the rapid implementation of the robot behavior. First, the GPS waypoints are obtained, and then the rover passes over the rows while picking the cotton bolls. The navigation is controlled by a modified pure-pursuit technique together with a PID controller. Two parallel programs coordinate the rover, deciding when to pick and when to navigate. While navigating, the rover looks for harvestable bolls, and when bolls are discovered, the robot stops and picks. It repeats this work until it finishes all the rows. The rover navigation had an absolute error mean of 0.189 m, a median of 0.172 m, a standard deviation of 0.137 m, and a maximum of 0.986 m. The largest errors occurred while turning around at the end of rows and were caused by wet conditions and tire slippage. The rover picked cotton bolls at an average Action Success Ratio (ASR) of 78.5% and was able to reach 95% of the bolls. Most bolls that were not picked could not be pulled into the vacuum by the rotating tines on the end-effector.
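    Navigation-error statistics like those quoted above can be computed from logged rover positions against the recorded waypoint path. A hedged NumPy sketch, assuming both tracks are already projected into a local metric frame, follows.

      import numpy as np

      def cross_track_error_stats(path_xy, rover_xy):
          # Absolute distance from each logged rover position to the nearest recorded
          # waypoint (a segment-wise projection would be more exact); arrays of shape (N, 2), metres.
          path = np.asarray(path_xy, dtype=float)
          rover = np.asarray(rover_xy, dtype=float)
          d = np.linalg.norm(rover[:, None, :] - path[None, :, :], axis=2).min(axis=1)
          return {"mean": d.mean(), "median": np.median(d),
                  "std": d.std(), "max": d.max()}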
  • Item
    Deep learning based Real-time GPU-accelerated tracking and counting of cotton bolls under field conditions using a moving camera
    (2018 ASABE Annual International Meeting, 2018-08) Fue, Kadeghe G.; Porter, Wesley; Rains, Glen
    Robotic harvesting involves navigation and environmental perception as the first operations before harvesting of the bolls can commence. Navigation determines the distance required for the harvester’s arm to reach the cotton boll, while perception determines the position of the boll relative to the surrounding environment. These two operations give the 3D position of the cotton boll for picking and can only be achieved by detecting and tracking the cotton bolls in real time. Hence, detection, tracking, and counting of cotton bolls using a moving camera allow the robotic machine to harvest more easily. GPU-accelerated deep neural networks were used to train the convolutional networks for detection of cotton bolls. This was achieved by using pretrained Tiny YOLO weights and DarkFlow, a framework that translates YOLOv2 darknet neural networks to TensorFlow. A method is proposed to connect tracklets using vectors that are predicted with the Lucas-Kanade algorithm and optimized using robust L-estimators and homography transformation. The system was tested on defoliated cotton plants during the spring of 2018. Using three video treatments, the counting accuracy was around 93% with a standard deviation of 6%. The average processing speed was 21 fps on a desktop computer and 3.9 fps on an embedded system. Detection achieved an accuracy and sensitivity of 93%, while precision was 99.9% and the F1 score was 1. Tukey’s test showed that the system accuracy and sensitivity were the same when the plants were rearranged. This performance is crucial for real-time robot decisions that also measure yield while harvesting.
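    A minimal sketch of the Lucas-Kanade step used to carry detected boll centroids from one frame to the next with OpenCV; the window size and pyramid level are typical defaults, not the tuned values from the paper.

      import cv2
      import numpy as np

      def propagate_centroids(prev_gray, next_gray, centroids):
          # Predict where detected boll centroids move between consecutive grayscale frames.
          pts = np.float32(centroids).reshape(-1, 1, 2)
          next_pts, status, _err = cv2.calcOpticalFlowPyrLK(
              prev_gray, next_gray, pts, None,
              winSize=(21, 21), maxLevel=3)
          ok = status.reshape(-1) == 1
          return next_pts.reshape(-1, 2)[ok], ok   # tracked positions and a validity mask

    Tracklets predicted this way would then be linked and de-duplicated (for example with robust estimators and a frame-to-frame homography, as the abstract describes) before counting.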
  • Item
    A simple convolutional neural network architecture for monitoring Tuta absoluta (Gelechiidae) infestation in tomato plants.
    (Sokoine University of Agriculture, 2021-05) Mourice, Sixbert K.; Mlebus, Festo Joseph; Fue, Kadeghe G.
    Tomato leaf miner (Tuta absoluta (Gelechiidae)) is a serious tomato insect pest in Tanzania, and its management or control still poses a significant challenge. If left uncontrolled, the losses inflicted by the miner can be as high as 100%. Successful management of the pest may leverage an integrated pest management (IPM) approach, which requires high-throughput data on damage signs over space and time. This, in turn, needs a robust technique for pest monitoring. This study uses a deep learning technique to detect infestation symptoms of T. absoluta on tomato plants. The technique is rapid, automated, and does not require trained or experienced personnel. An experiment was carried out at Sokoine University of Agriculture (SUA), where two sets of tomato plants (cv. Asila F1) were planted in a screen house and in an open field. High-quality images of the tomato leaves were captured from both sets at seven-day intervals for 70 days following transplanting. More images were collected from tomato gardens around Morogoro town. Collected images were labeled as infested or non-infested. A simple convolutional neural network (CNN) architecture with four convolution layers, three pooling layers, one flatten layer, and one dense layer, powered by the Keras library and Python’s TensorFlow backend, was developed in R. The model accuracy was 90% on the training and 82% on the test data sets. This study suggests that the model can accurately identify T. absoluta infestation in tomato plants to a considerable extent. An in-depth discussion of the technique is provided in the paper.
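    The abstract describes the architecture only at a high level (four convolution layers, three pooling layers, a flatten layer, and one dense layer, built with Keras from R). A Python Keras sketch of one plausible configuration follows; the filter counts, kernel sizes, and input resolution are assumptions, not the paper's values.

      from tensorflow.keras import layers, models

      def build_model(input_shape=(128, 128, 3)):
          # Small CNN for binary classification of infested vs. non-infested tomato leaves.
          return models.Sequential([
              layers.Conv2D(16, 3, activation='relu', input_shape=input_shape),
              layers.MaxPooling2D(),
              layers.Conv2D(32, 3, activation='relu'),
              layers.MaxPooling2D(),
              layers.Conv2D(64, 3, activation='relu'),
              layers.MaxPooling2D(),
              layers.Conv2D(64, 3, activation='relu'),
              layers.Flatten(),
              layers.Dense(1, activation='sigmoid'),
          ])

      model = build_model()
      model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])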