Mock Accident 3D Model


Overview: 

    The purpose of this operation was to compare different image acquisition methods. Seven crews each flew the exact same mock accident scene with a different method; after the 3D models were created, we could compare which methods worked best and see how they could be improved.

The Flight Operation:


    While in the field, we talked about the differences in resolution and how they relate to altitude. The higher you fly, the larger the ground footprint of each pixel, which ultimately lowers your resolution. We also talked about accuracy and precision. We improved accuracy with ground control points, the “Propeller AeroPoints” (Figure 29), to help with the 3D modeling in post-processing.


Figure 29: Propeller AeroPoints
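The altitude-versus-resolution relationship above can be sketched as a ground sample distance (GSD) calculation. The sensor and image dimensions below are assumed values for illustration (roughly a 1-inch sensor); check your aircraft's spec sheet before relying on the numbers.

```python
def ground_sample_distance(altitude_m, sensor_width_mm, focal_length_mm, image_width_px):
    """Ground footprint of one pixel (cm/px) at a given flight altitude."""
    # GSD grows linearly with altitude: higher flight -> bigger pixels -> coarser detail
    return (sensor_width_mm * altitude_m * 100) / (focal_length_mm * image_width_px)

# Assumed camera parameters for illustration only
SENSOR_WIDTH_MM = 13.2
FOCAL_LENGTH_MM = 10.3
IMAGE_WIDTH_PX = 5472

for alt in (10, 15, 20):  # the altitudes we flew
    gsd = ground_sample_distance(alt, SENSOR_WIDTH_MM, FOCAL_LENGTH_MM, IMAGE_WIDTH_PX)
    print(f"{alt} m -> {gsd:.2f} cm/px")
```

With these assumed parameters, doubling the altitude from 10 m to 20 m doubles the GSD, which is exactly the pixel-size effect discussed above.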


  1. I was in crew 7, which had three members: myself, Cameron Dine, and Nick Phillips. Our mission was as follows:

    • Circle Orbits

    • Altitudes: 10m, 15m, 20m

    • Lens Angle: 10°, 20°, 30°

    • Overlap in 5° sections


  2. Before we could start the mission, we had to set up our mission plan in Pix4Dcapture. Once the vehicles were in place, Cameron walked the drone to the center of the mock accident scene so that we knew the center point of our circle orbits; after that, all we had to do was change the altitude and camera angle for each flight. We placed 4 ground control points in the flight area, and their locations can be seen as the numbered boxes (Figure 30). These points, the “Propeller AeroPoints,” give us more accurate location data, which in turn gives us a more accurate 3D model. The rectangles show the orientation of the vehicles.
    Figure 30: Sketch Of Operation Area
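One way to picture the circle-orbit plan is to generate the waypoints ourselves. This is a hypothetical sketch, not the actual Pix4Dcapture plan; the center coordinate, orbit radius, and waypoint count are all assumed values.

```python
import math

def circle_orbit_waypoints(center_lat, center_lon, radius_m, altitude_m, n_points=24):
    """Evenly spaced waypoints on a circle around the scene center.

    Uses a flat-earth approximation, which is fine for a 10-20 m orbit radius.
    Each waypoint's heading points the camera back at the center.
    """
    m_per_deg_lat = 111_320.0                      # metres per degree of latitude
    m_per_deg_lon = m_per_deg_lat * math.cos(math.radians(center_lat))
    waypoints = []
    for i in range(n_points):
        theta = 2 * math.pi * i / n_points         # bearing of this waypoint from center
        lat = center_lat + (radius_m * math.cos(theta)) / m_per_deg_lat
        lon = center_lon + (radius_m * math.sin(theta)) / m_per_deg_lon
        # Face back toward the center so the lens stays on the accident scene
        heading = (math.degrees(theta) + 180) % 360
        waypoints.append((lat, lon, altitude_m, heading))
    return waypoints

# One orbit per altitude we flew (15 m radius and coordinates are assumptions)
plans = {alt: circle_orbit_waypoints(40.4237, -86.9212, 15, alt) for alt in (10, 15, 20)}
```

Between orbits, only the altitude (and, on the aircraft, the gimbal angle) changes, which matches how little we had to adjust in the field between flights.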

Flight Data: 

Conditions:                                                                                  
  • Clear Sky, No Clouds
  • Very Light Wind Out of the North
  • Approximately 75°F
  • Few Birds Present
Hazards:
  • Construction Work to the West
  • M210 Flying to the West
  • Multiple Crews Operating in the Area (16 People on Site)
  • Trees border Flight Area

Flight 1:
  • PIC: Hunter Donaldson
  • VO: Cameron Dine
  • Flight 1 was Conducted at 20m with a lens angle of 30°
  • Takeoff: 2:10pm
  • Landing: 2:16pm
  • Aircraft: DJI Mavic 2 Pro

Flight 2: 
  • PIC: Nick Phillips
  • VO: Hunter Donaldson
  • Flight 2 was Conducted at 10m with a lens angle of 10°
  • Takeoff: 2:42pm
  • Landing: 2:45pm
  • Aircraft: DJI Mavic 2 Pro

* The Aircraft Battery was Replaced after Flight 2 *

Flight 3:
  • PIC: Cameron Dine
  • VO: Nick Phillips
  • Flight 3 was Conducted at 15m with a lens angle of 20°
  • Takeoff: 2:55pm
  • Landing: 2:57pm
  • Aircraft: DJI Mavic 2 Pro

Processing The Data:

The first step is to import your imagery into Pix4Dmapper. You do this by selecting all the images you want to process and loading them into the software (Figure 31).

Figure 31: Uploading Data To Software

The next step is to set the image properties. This includes the coordinate system, geolocation options, and camera model/settings (Figure 32).

Figure 32: Image Properties Page

Next, you want to select the output coordinate system, that is, the coordinate reference system in which the finished products will be georeferenced (Figure 33).

Figure 33: Selecting Output Coordinate System

The final step is to select the type of product you'd like the program to make. In this case, we are going to make a 3D model (Figure 34).

Figure 34: Selecting The Type Of Project


Results:    


Overall, our 3D model turned out very well. I attribute that to the data acquisition method we were assigned. We were the only crew to fly circle orbits around the accident site, and I think data collected from circular orbits tends to give better results than a grid pattern alone. Having circle orbits at multiple altitudes and lens angles allowed us to capture the sides, front, and back of each vehicle in greater detail. The north-facing view of the 3D model (Figure 35) is a great example of this: the vehicles are easily distinguishable and shown in detail.

    Another great feature of the acquisition method we used is the amount of detail it captures. The trucks are recreated so well that you can distinguish the different makes and models, even though they look similar (Figure 36). In addition, the circular orbit method captured a decent view of the surrounding area as well (Figure 37). The ground control points are also clearly visible, enabling accurate georeferencing. Examples of where this would be useful include crime scene reconstruction, engineering, commercial advertising, construction, and architecture.

    However, there are drawbacks to using only one data acquisition method. Since we only collected data from around the scene, we are missing some detail in the top view of the model (Figure 38). Our model is very detailed around the outside of the scene, but much less so from above, and both black trucks show some deformation on their roofs. Another drawback is that there are a few shadows and dark areas because we are missing that top-down coverage (Figure 38).


Figure 35: North Facing View 

Figure 36: West Facing View

Figure 37: Surrounding Area Of Mock Accident

Figure 38: Top-Down View Of Mock Accident


Conclusion:

    Overall, I think our data acquisition method is a good choice if you only need a general overview of the scene or are in a time crunch to produce your 3D model. If you want a high-quality, detailed model, you would want to combine the orbits we flew with a nadir grid-pattern collection as well. That way you can capture the top, sides, and essentially every angle in your 3D model.





