ORCID Identifier(s)

0000-0002-4191-4883

Graduation Semester and Year

Fall 2024

Language

English

Document Type

Dissertation

Degree Name

Doctor of Philosophy in Mechanical Engineering

Department

Mechanical and Aerospace Engineering

First Advisor

Dr. Panos S. Shiakolas

Second Advisor

Dr. Paul Davidson

Third Advisor

Dr. Nora Ameri

Fourth Advisor

Dr. Christopher McMurrough

Fifth Advisor

Dr. Prashanth Ravi

Abstract

Additive Manufacturing (AM) has revolutionized the manufacturing industry by enabling the production of complex and customized components across a variety of applications. As the demand for complex, custom-designed parts grows, automated quality assurance processes are required to ensure that the final product meets the design specifications. Deviations of internal features and external geometries from the design specifications, in both dimensions and shape, can affect the functional performance of the printed components. Traditional quality assurance methods are reactive rather than proactive, typically identifying issues only after manufacture. In situ monitoring of each layer in real time aims to provide immediate feedback on print quality by detecting potential defects and geometric deviations. However, evaluating the geometric conformity of in-layer features remains a challenge due to low contrast between the features and the background, as well as textural variations in the background, imaging artifacts, and lighting conditions.

This research investigates the development of an in situ vision-based framework for AM that takes advantage of a custom-developed image acquisition and processing environment to identify in-layer geometric features and determine their shape and dimensions. An image processing pipeline was developed to reduce noise, enhance contrast, and improve overall image quality. A Region of Interest (ROI) is established to align the as-printed and as-processed layer masks. Calibration methodologies are developed to obtain accurate measurements of the dimensions and locations of the segmented contours. The shapes and dimensions of the as-printed layer features are compared with those of the as-processed layer to evaluate the geometric differences between them and ensure that the manufactured part meets the required geometric specifications. The effectiveness of the segmentation method can significantly impact the accuracy and reliability of the feature recognition system. Several segmentation methods (simple thresholding, adaptive thresholding, the Sobel edge detector, the Canny edge detector, and the watershed transform) are evaluated for their ability to detect high- and low-contrast in-layer features. While these methods reliably segment high-contrast features, their performance is limited for low-contrast features. Based on this evaluation, a composite approach is introduced that combines simple thresholding for high-contrast external features with the Chan-Vese (C-V) active contour model for low-contrast internal features. The effects of the C-V parameters (initial level set, intensity weighting factors, contour smoothness, and number of iterations) on the segmentation of low-contrast internal features are evaluated.
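For readers unfamiliar with the composite idea, a minimal Python sketch is given below, assuming OpenCV and scikit-image are available; the image file name, threshold choice, C-V parameter values, and the simple union of the two masks are illustrative assumptions and do not reflect the dissertation's actual implementation.

# Illustrative sketch: simple thresholding for high-contrast external features
# combined with the Chan-Vese active contour model for low-contrast internal
# features. Libraries, file name, and parameter values are assumptions.
import cv2
import numpy as np
from skimage.segmentation import chan_vese

# Acquired single-layer image, loaded as grayscale (hypothetical file name).
layer = cv2.imread("layer_image.png", cv2.IMREAD_GRAYSCALE)

# Simple (Otsu) thresholding captures high-contrast external features.
_, external_mask = cv2.threshold(layer, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Chan-Vese active contours capture low-contrast internal features.
# mu controls contour smoothness, lambda1/lambda2 are the intensity weighting
# factors, and init_level_set/max_num_iter set the initialization and iterations.
internal_mask = chan_vese(
    layer.astype(np.float64) / 255.0,
    mu=0.25,
    lambda1=1.0,
    lambda2=1.0,
    init_level_set="checkerboard",
    max_num_iter=200,
)

# Naive combination of the two results into one composite mask; the dissertation
# evaluates a more deliberate composite strategy than this simple union.
composite_mask = np.logical_or(external_mask > 0, internal_mask)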

The framework was evaluated on a customized Fused Deposition Modeling (FDM) printer. The control system software for printing and imaging (acquisition and processing) was custom developed in Python and runs on a Raspberry Pi. Single-layer and multilayer parts with distinctly shaped features (squares, triangles, and circles) were printed to evaluate the framework. The segmentation performance of the composite method was compared with that of the traditional methods, with the results showing that the composite method scores higher on most metrics and is effective in segmenting both high- and low-contrast features. The improved segmentation enabled the identification of feature shapes and geometric differences ranging from 1 to 10 pixels, verifying the ability of the framework to detect differences at the pixel level (1 pixel = 0.025 mm) on the evaluation platform. The results demonstrate the potential of the proposed framework to segment features under different contrast and texture conditions, providing confidence towards ensuring geometric conformity of the printed features on a layer-by-layer basis in real time. The research further explores reconstruction of a 3D representation of the as-printed part from the acquired 2D layer images. The reconstructed 3D model can be used to perform functional assessments of the as-printed part. Part recovery strategies are discussed to provide initial guidelines, based on the analysis of the as-printed part, for ensuring that the part meets the design requirements and specifications.
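As a brief illustration of how pixel-level differences map to physical dimensions, the sketch below compares feature bounding boxes between an as-printed and an as-processed layer mask; only the 0.025 mm/pixel scale is taken from the abstract, while the mask names, use of OpenCV contours, and the bounding-box comparison are illustrative assumptions rather than the dissertation's method.

# Illustrative sketch: converting pixel-level feature measurements to millimeters
# and comparing as-printed against as-processed (design) masks. Only the
# 0.025 mm/pixel scale comes from the abstract; the rest is assumed.
import cv2
import numpy as np

MM_PER_PIXEL = 0.025  # pixel-to-mm calibration reported for the evaluation platform

def feature_sizes_mm(mask: np.ndarray) -> list:
    """Return (width, height) in mm for each external feature contour in a binary mask."""
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    sizes = []
    for contour in contours:
        _, _, w, h = cv2.boundingRect(contour)
        sizes.append((w * MM_PER_PIXEL, h * MM_PER_PIXEL))
    return sizes

# as_printed_mask and as_processed_mask are aligned binary layer masks
# (hypothetical names; ROI alignment and calibration steps are omitted here).
# printed = feature_sizes_mm(as_printed_mask)
# designed = feature_sizes_mm(as_processed_mask)
# deviations_mm = [(pw - dw, ph - dh)
#                  for (pw, ph), (dw, dh) in zip(printed, designed)]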

Keywords

Additive Manufacturing, In-situ feature recognition, Image segmentation, Quality Control, Active Contour, Chan-Vese, Fused Deposition Modeling, Computational Metrology

Disciplines

Mechanical Engineering

License

This work is licensed under a Creative Commons Attribution 4.0 International License.

Available for download on Friday, January 08, 2027
