Desarrollo de un estudio para la implementación de cosecha selectiva de café arábica aplicando vibraciones de alta frecuencia
dc.contributor.advisor | Arizmendi Pereira, Carlos Julio | |
dc.contributor.author | Yarce Herrera, Jeison Ivan | |
dc.coverage.spatial | Colombia | spa |
dc.date.accessioned | 2023-03-07T19:12:15Z | |
dc.date.available | 2023-03-07T19:12:15Z | |
dc.date.issued | 2022 | |
dc.identifier.uri | http://hdl.handle.net/20.500.12749/19201 | |
dc.description.abstract | En el presente estudio se propone analizar un dispositivo que permita estimular los frutos maduros, discriminando el movimiento de los frutos verdes, en frecuencias de movimiento muy específicas. La tecnología se propone sobre la base de un dispositivo acústico, con una técnica que permita focalizar la energía mediante arreglos de ondas armónicas que deben ser controladas para excitar solo los frutos maduros. Por lo tanto, se observa una clara oportunidad en el estudio de la subestructura fruto-pedúnculo para determinar, en los diferentes estados de maduración, índices dinámicos que favorezcan el desprendimiento de los frutos maduros. | spa |
dc.description.tableofcontents |
1 Introducción
  1.1 Motivación
  1.2 Objetivos
2 Estado del arte
  2.1 Introducción
  2.2 Estado del arte en detección de objetos
    2.2.1 R-CNN
    2.2.2 Fast R-CNN
    2.2.3 Faster R-CNN
    2.2.4 YOLO
  2.3 Estado del arte en cosecha selectiva
    2.3.1 Vibraciones mecánicas
    2.3.2 Técnicas visuales
    2.3.3 Propiedades de la fruta del café
    2.3.4 Análisis armónico
  2.4 Principales tecnologías utilizadas
    2.4.1 Python
    2.4.2 PyTorch
    2.4.3 OpenCV
    2.4.4 Pandas
    2.4.5 NumPy
3 Redes neuronales artificiales
  3.1 Introducción
  3.2 La neurona biológica
  3.3 La neurona artificial
  3.4 Estructura de las redes neuronales
    3.4.1 Redes de tipo feed-forward
    3.4.2 Redes de tipo recurrente
    3.4.3 Redes de tipo residual
  3.5 Tipos de aprendizaje en las redes neuronales
    3.5.1 Aprendizaje supervisado
    3.5.2 Aprendizaje no supervisado
    3.5.3 Aprendizaje semi-supervisado o híbrido
    3.5.4 Aprendizaje por refuerzo
  3.6 Métodos de aprendizaje
    3.6.1 Función de coste
    3.6.2 Descenso del gradiente
    3.6.3 Descenso estocástico de gradiente
    3.6.4 Propagación hacia atrás
  3.7 Medidas de prevención de sobreajuste
  3.8 Redes neuronales convolucionales
    3.8.1 Capa de convolución
    3.8.2 Capa de pooling
    3.8.3 Capa softmax
4 Detector de estados de maduración
  4.1 Introducción
    4.1.1 Funcionamiento general del sistema
  4.2 Módulo de detección y clasificación
5 Configuración sistema acústico
  5.1 Introducción
  5.2 Dispositivos
    5.2.1 Cabina activa Turbosound iQ15 (15")
    5.2.2 Micrófono de medición Behringer ECM8000
    5.2.3 Interfaz de audio USB PreSonus Studio 24c
    5.2.4 Sensor piezoeléctrico
    5.2.5 Sonómetro UNI-T UT353
  5.3 Etapa de calibración
  5.4 Diseño de soporte
    5.4.1 Diseño de soporte
    5.4.2 Manufactura soporte
  5.5 Montaje
  5.6 Simulaciones
6 Resultados
  6.1 Introducción
| spa |
dc.format.mimetype | application/pdf | spa |
dc.language.iso | spa | spa |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/2.5/co/ | * |
dc.title | Desarrollo de un estudio para la implementación de cosecha selectiva de café arábica aplicando vibraciones de alta frecuencia | spa |
dc.title.translated | Development of a study for the implementation of selective harvesting of arabica coffee by applying high frequency vibrations | spa |
dc.degree.name | Ingeniero Mecatrónico | spa |
dc.publisher.grantor | Universidad Autónoma de Bucaramanga UNAB | spa |
dc.rights.local | Abierto (Texto Completo) | spa |
dc.publisher.faculty | Facultad Ingeniería | spa |
dc.publisher.program | Pregrado Ingeniería Mecatrónica | spa |
dc.description.degreelevel | Pregrado | spa |
dc.type.driver | info:eu-repo/semantics/bachelorThesis | |
dc.type.local | Trabajo de Grado | spa |
dc.type.coar | http://purl.org/coar/resource_type/c_7a1f | |
dc.subject.keywords | Mechatronic | spa |
dc.subject.keywords | Coffee harvest | spa |
dc.subject.keywords | Arabica coffee | spa |
dc.subject.keywords | Ripe fruits | spa |
dc.subject.keywords | Deep learning | spa |
dc.subject.keywords | Artificial intelligence | spa |
dc.subject.keywords | Technological innovations | spa |
dc.subject.keywords | Agricultural innovations | spa |
dc.subject.keywords | Process development | spa |
dc.subject.keywords | Algorithms | spa |
dc.identifier.instname | instname:Universidad Autónoma de Bucaramanga - UNAB | spa |
dc.identifier.reponame | reponame:Repositorio Institucional UNAB | spa |
dc.type.hasversion | info:eu-repo/semantics/acceptedVersion | |
dc.rights.accessrights | info:eu-repo/semantics/openAccess | spa |
dc.relation.references | [1] L. Figueiredo, I. Jesus, J. A. T. Machado, J. R. Ferreira, and J. L. Martins de Carvalho, “Towards the development of intelligent transportation systems,” in ITSC 2001. 2001 IEEE Intelligent Transportation Systems. Proceedings (Cat. No.01TH8585), 2001, pp. 1206– 1211. | spa |
dc.relation.references | [2] F. Torres, G. Barros, and M. J. Barros, “Computer vision classifier and platform for automatic counting: More than cars,” 2017 IEEE 2nd Ecuador Tech. Chapters Meet. ETCM 2017, pp. 1–6, 2017. | spa |
dc.relation.references | [3] E. Soroush, A. Mirzaei, and S. Kamkar, “Near Real-Time Vehicle Detection and Tracking in Highways,” no. March 2017, p. 11, 2016. | spa |
dc.relation.references | [4] A.-O. Fulop and L. Tamas, “Lessons learned from lightweight CNN based object recognition for mobile robots,” 2018. | spa |
dc.relation.references | [5] P. Abrahamsson et al., “Affordable and energy-efficient cloud computing clusters: The Bolzano Raspberry Pi cloud cluster experiment,” Proc. Int. Conf. Cloud Comput. Technol. Sci. CloudCom, vol. 2, pp. 170–175, 2013. | spa |
dc.relation.references | [6] D. Pena, A. Forembski, X. Xu, and D. Moloney, “Benchmarking of CNNs for Low-Cost, Low-Power Robotics Applications,” 2017. | spa |
dc.relation.references | [7] J. Huang et al., “Speed/accuracy trade-offs for modern convolutional object detectors,” Proc. - 30th IEEE Conf. Comput. Vis. Pattern Recognition, CVPR 2017, vol. 2017-Janua, pp. 3296–3305, 2017. | spa |
dc.relation.references | [8] Ministerio del Trabajo, “Incremento del Salario Básico Unificado 2019.,” 2018. [Online]. Available: http://www.trabajo.gob.ec/incremento-del-salario-basico-unificado-2019/. [Accessed: 10-Sep-2019] | spa |
dc.relation.references | [9] Z. Chen, T. Ellis, and S. A. Velastin, “Vehicle detection, tracking and classification in urban traffic,” IEEE Conf. Intell. Transp. Syst. Proceedings, ITSC, pp. 951–956, 2012. | spa |
dc.relation.references | [10] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: Unified, real-time object detection,” Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., vol. 2016-Decem, pp. 779–788, 2016. | spa |
dc.relation.references | [11] J. Redmon and A. Farhadi, “YOLO9000: Better, faster, stronger,” Proc. - 30th IEEE Conf. Comput. Vis. Pattern Recognition, CVPR 2017, 2017. | spa |
dc.relation.references | [12] J. Redmon and A. Farhadi, “YOLOv3: An Incremental Improvement,” 2018. | spa |
dc.relation.references | [13] W. Liu et al., “SSD: Single shot multibox detector,” Lect. Notes Comput. Sci. (including Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinformatics), vol. 9905 LNCS, pp. 21–37, 2016. | spa |
dc.relation.references | [14] R. Girshick, “Fast R-CNN,” Proc. IEEE Int. Conf. Comput. Vis., vol. 2015 Inter, pp. 1440– 1448, 2015. | spa |
dc.relation.references | [15] S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 39, no. 6, pp. 1137–1149, 2017. | spa |
dc.relation.references | [16] M. J. Shaifee, B. Chywl, F. Li, and A. Wong, “Fast YOLO: A Fast You Only Look Once System for Real-time Embedded Object Detection in Video,” J. Comput. Vis. Imaging Syst., vol. 3, no. 1, 2017. | spa |
dc.relation.references | [17] L. Zhang, L. Lin, X. Liang, and K. He, “Is faster R-CNN doing well for pedestrian detection?,” Lect. Notes Comput. Sci. (including Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinformatics), vol. 9906 LNCS, pp. 443–457, 2016. | spa |
dc.relation.references | [18] Y. Jia et al., “Caffe: Convolutional architecture for fast feature embedding,” MM 2014 - Proc. 2014 ACM Conf. Multimed., vol. abs/1506.0, pp. 675–678, 2014. | spa |
dc.relation.references | [19] P. B. Martín Abadi et al., “TensorFlow: A system for large-scale machine learning,” Methods Enzymol., 2016. | spa |
dc.relation.references | [20] Y. LeCun et al., “Backpropagation Applied to Handwritten Zip Code Recognition,” Neural Comput., vol. 1, no. 4, pp. 541–551, 1989. | spa |
dc.relation.references | [21] Chuanqi305, “MobileNet-SSD,” 2019. [Online]. Available: https://github.com/chuanqi305/MobileNet-SSD. [Accessed: 10-Sep-2019]. | spa |
dc.relation.references | [22] A. Rosebrock, “Real-time object detection on the Raspberry Pi with the Movidius NCS,” 2018. [Online]. Available: https://www.pyimagesearch.com/2018/02/19/real-time-object-detection-on-the-raspberry-pi-with-the-movidius-ncs/. [Accessed: 10-Sep-2019]. | spa |
dc.relation.references | [23] Taehoonlee, “High level network definitions with pre-trained weights in TensorFlow,” 2019. [Online]. Available: https://github.com/taehoonlee/tensornets. [Accessed: 10-Sep-2019]. | spa |
dc.relation.references | [24] K. Hyodo, “YoloV3,” 2019. [Online]. Available: https://github.com/PINTO0309. [Accessed: 10-Sep-2019]. | spa |
dc.relation.references | [25] J. Redmon and A. Farhadi, “Tiny YOLOv3,” 2018. [Online]. Available: https://pjreddie.com/darknet/yolo/. [Accessed: 10-Sep-2019]. | spa |
dc.relation.references | [26] J. Hui, “SSD object detection: Single Shot MultiBox Detector for real-time processing,” 2018. [Online]. Available: https://medium.com/@jonathan_hui/ssd-object-detection-single-shot-multibox-detector-for-real-time-processing-9bd8deac0e06. [Accessed: 10-Sep-2019]. | spa |
dc.relation.references | [27] CyberAILab, “A Closer Look at YOLOv3,” 2018. [Online]. Available: https://www.cyberailab.com/home/a-closer-look-at-yolov3. [Accessed: 10-Sep-2019]. | spa |
dc.relation.references | [28] T. Y. Lin et al., “Microsoft COCO: Common objects in context,” Lect. Notes Comput. Sci. (including Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinformatics), vol. 8693 LNCS, no. PART 5, pp. 740–755, 2014. | spa |
dc.relation.references | [29] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman, “The pascal visual object classes (VOC) challenge,” Int. J. Comput. Vis., vol. 88, no. 2, pp. 303–338, 2010. | spa |
dc.relation.references | [30] M. Everingham, L. Van Gool, C. Williams, J. Winn, and A. Zisserman, “The PASCAL Visual Object Classes Challenge 2012 (VOC2012),” 2012. [Online]. Available: http://host.robots.ox.ac.uk/pascal/VOC/voc2012/. [Accessed: 10-Sep-2019]. | spa |
dc.relation.references | [31] A. Rosebrock, “Simple object tracking with OpenCV,” 2018. [Online]. Available: https://www.pyimagesearch.com/2018/07/23/simple-object-tracking-with-opencv/. [Accessed: 10-Sep-2019]. | spa |
dc.relation.references | [32] M. Danelljan, G. Häger, F. S. Khan, and M. Felsberg, “Accurate scale estimation for robust visual tracking,” BMVC 2014 - Proc. Br. Mach. Vis. Conf. 2014, 2014. | spa |
dc.relation.references | [34] Intel, “Deep Learning For Computer Vision,” 2019. [Online]. Available: https://software.intel.com/en-us/openvino-toolkit/deep-learning-cv. [Accessed: 10-Oct-2019]. | spa |
dc.relation.references | [35] Intel, “Intel® Neural Compute Stick 2,” 2019. [Online]. Available: https://software.intel.com/en-us/neural-compute-stick. [Accessed: 10-Oct-2019]. | spa |
dc.relation.references | [36] Raspberry Pi, “Raspberry Pi 3 Model B+,” 2018. [Online]. Available: https://www.raspberrypi.org/products/raspberry-pi-3-model-b-plus/. [Accessed: 10-Oct-2019]. | spa |
dc.relation.references | [37] Python, “About Python,” 2019. [Online]. Available: https://www.python.org/about/. [Accessed: 10-Oct-2019]. | spa |
dc.relation.references | [38] A. Rosebrock, “Faster video file FPS with cv2.VideoCapture and OpenCV,” 2017. [Online]. Available: https://www.pyimagesearch.com/2017/02/06/faster-video-file-fps-with-cv2- videocapture-and-opencv/. [Accessed: 10-Oct-2019]. | spa |
dc.contributor.cvlac | Arizmendi Pereira, Carlos Julio [0001381550] | spa |
dc.contributor.googlescholar | Arizmendi Pereira, Carlos Julio [JgT_je0AAAAJ] | spa |
dc.contributor.orcid | Arizmendi Pereira, Carlos Julio | spa |
dc.contributor.scopus | Arizmendi Pereira, Carlos Julio [16174088500] | spa |
dc.contributor.researchgate | Arizmendi Pereira, Carlos Julio [Carlos_Arizmendi2] | spa |
dc.subject.lemb | Mecatrónica | spa |
dc.subject.lemb | Inteligencia artificial | spa |
dc.subject.lemb | Innovaciones tecnológicas | spa |
dc.subject.lemb | Innovaciones agrícolas | spa |
dc.subject.lemb | Desarrollo de procesos | spa |
dc.subject.lemb | Algoritmos | spa |
dc.identifier.repourl | repourl:https://repository.unab.edu.co | spa |
dc.description.abstractenglish | This study proposes the analysis of a device that stimulates ripe fruits at very specific movement frequencies while discriminating against the movement of green fruits. The technology is based on an acoustic device, using a technique that focuses energy through arrays of harmonic waves, which must be controlled so that only the ripe fruits are excited. A clear opportunity therefore lies in studying the fruit-peduncle substructure in order to determine, for the different stages of maturation, dynamic indices that favor the detachment of ripe fruits. | spa |
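The selectivity principle described in the abstract (exciting only the ripe fruits by driving the fruit-peduncle subsystem near a maturation-dependent resonance) can be sketched with a minimal damped harmonic oscillator model. The natural frequencies and damping ratio below are purely illustrative assumptions, not measured values from the study.

```python
import math

def steady_state_amplitude(f_drive, f_nat, zeta=0.05):
    """Steady-state amplitude (per unit force per unit mass) of a damped
    harmonic oscillator with natural frequency f_nat driven at f_drive."""
    w = 2 * math.pi * f_drive    # driving angular frequency (rad/s)
    wn = 2 * math.pi * f_nat     # natural angular frequency (rad/s)
    return 1.0 / math.sqrt((wn**2 - w**2)**2 + (2 * zeta * wn * w)**2)

# Hypothetical natural frequencies of the fruit-peduncle subsystem at two
# maturation stages -- illustrative values only, not from the thesis.
F_RIPE, F_GREEN = 25.0, 60.0   # Hz

# Drive at the ripe fruit's resonance and compare the two responses.
a_ripe = steady_state_amplitude(F_RIPE, F_RIPE)
a_green = steady_state_amplitude(F_RIPE, F_GREEN)
selectivity = a_ripe / a_green
```

For these assumed parameters, the modeled ripe-fruit response at its own resonance exceeds the green-fruit response by more than an order of magnitude, which is the kind of contrast the controlled harmonic-wave arrays would aim to exploit for selective detachment.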
dc.subject.proposal | Cosecha de café | spa |
dc.subject.proposal | Café arábica | spa |
dc.subject.proposal | Frutos maduros | spa |
dc.subject.proposal | Aprendizaje profundo | spa |
dc.type.redcol | http://purl.org/redcol/resource_type/TP | |
dc.rights.creativecommons | Atribución-NoComercial-SinDerivadas 2.5 Colombia | * |
dc.type.coarversion | http://purl.org/coar/version/c_ab4af688f83e57aa | spa |
dc.contributor.apolounab | Arizmendi Pereira, Carlos Julio [carlos-julio-arizmendi-pereira] | spa |
dc.coverage.campus | UNAB Campus Bucaramanga | spa |
dc.description.learningmodality | Modalidad Presencial | spa |
This item appears in the following collection(s):
- Ingeniería Mecatrónica [294]