===== Description of the machine learning problem and our approach =====
  
The data set consists of two DTMs, one used for training and the other for testing. In the first step, the training DTM is tiled into several smaller fixed-size images. The label masks are created from the available ground-truth shapefiles. The images are then scaled to the range [-1, 1]. The training set is further split into train and validation sets with an 80/20 ratio. The **train set is augmented** in the next step with image manipulations such as flipping, rotation and rescaling to create a large training set for the segmentation task.
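
To make the preprocessing concrete, a minimal sketch is given below: it tiles a DTM, rescales each tile to [-1, 1], performs the 80/20 split and applies simple flip/rotation augmentation. The tile size of 256 px, the use of numpy and scikit-learn, and the random stand-in arrays are illustrative assumptions, not the code of the actual pipeline.

<code python>
# Minimal sketch of the preprocessing described above (not the repository code).
import numpy as np
from sklearn.model_selection import train_test_split

TILE = 256  # assumed tile size in pixels

def tile_array(arr, tile=TILE):
    """Cut a 2-D array into non-overlapping tile x tile patches."""
    h, w = arr.shape
    patches = [arr[y:y + tile, x:x + tile]
               for y in range(0, h - tile + 1, tile)
               for x in range(0, w - tile + 1, tile)]
    return np.stack(patches)

def scale_to_unit_range(tiles):
    """Linearly rescale every tile to the range [-1, 1]."""
    lo = tiles.min(axis=(1, 2), keepdims=True)
    hi = tiles.max(axis=(1, 2), keepdims=True)
    return 2.0 * (tiles - lo) / np.maximum(hi - lo, 1e-6) - 1.0

# Stand-ins for the real training DTM and the mask rasterised from the shapefiles.
dtm = np.random.rand(1024, 1024).astype(np.float32)
mask = (dtm > 0.8).astype(np.uint8)

images = scale_to_unit_range(tile_array(dtm))
labels = tile_array(mask)

# 80/20 train/validation split, as described in the text.
x_train, x_val, y_train, y_val = train_test_split(
    images, labels, test_size=0.2, random_state=42)

# Simple augmentation of the training tiles: horizontal flips and 90-degree rotations.
x_train = np.concatenate([x_train, x_train[:, :, ::-1], np.rot90(x_train, axes=(1, 2))])
y_train = np.concatenate([y_train, y_train[:, :, ::-1], np.rot90(y_train, axes=(1, 2))])
</code>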
  
For the initial **image segmentation task**, a **standard UNet** (Ronneberger et al., 2015) is trained on the training set. A mean IoU (Intersection over Union) of about 60% on the validation set is obtained. This result is consistent with another GAN-based model, indicating a saturation of the information present in the training set.
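
The reported figure refers to the mean IoU over the validation tiles. As a rough illustration (not the repository's evaluation code), the metric can be computed as follows, assuming binary masks and a placeholder ''predict_fn'' for the trained UNet:

<code python>
# Sketch of the reported metric only; predict_fn is a placeholder for the
# trained U-Net and is not part of the repository's interface.
import numpy as np

def binary_iou(pred, target, eps=1e-6):
    """Intersection over Union for one pair of binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)

def mean_iou(preds, targets):
    """Mean IoU over a set of validation tiles."""
    return float(np.mean([binary_iou(p, t) for p, t in zip(preds, targets)]))

# Example usage, assuming a sigmoid-output network:
#   preds = predict_fn(x_val) > 0.5
#   print(mean_iou(preds, y_val))
</code>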
  
Due to the limited number of samples to train from, we learn a **generative model** (Goodfellow et al., 2020) to approximate the true distribution of the landforms. We generate an augmented set using this approach and train the image segmentation model again, observing an improvement of about 10% in the IoU. This is an interesting result, as it indicates that the **model can be used to simulate the mound terrains**. The approximated distribution space should then be factorisable into a set of independent mechanisms, which could control the factors of variation.
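
The sketch below illustrates the adversarial training idea in a heavily condensed form: a generator maps latent vectors to synthetic tiles, a discriminator learns to distinguish them from real tiles, and samples from the trained generator augment the segmentation training set. The choice of PyTorch, the toy fully connected networks and all hyperparameters are assumptions; the actual models are in the GitHub repository.

<code python>
# Condensed, assumption-laden sketch of GAN-based augmentation (not the repository code).
import torch
import torch.nn as nn

latent_dim = 128            # assumed latent size
n_px = 256 * 256            # flattened tile size (toy fully connected networks)

G = nn.Sequential(          # generator: latent vector -> flattened tile in [-1, 1]
    nn.Linear(latent_dim, 1024), nn.ReLU(),
    nn.Linear(1024, n_px), nn.Tanh())
D = nn.Sequential(          # discriminator: flattened tile -> real/fake logit
    nn.Linear(n_px, 1024), nn.LeakyReLU(0.2),
    nn.Linear(1024, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def gan_step(real_tiles):
    """One adversarial update on a batch of real tiles flattened to (b, n_px)."""
    b = real_tiles.size(0)
    fake = G(torch.randn(b, latent_dim))

    # Discriminator update: real tiles -> 1, generated tiles -> 0.
    opt_d.zero_grad()
    loss_d = (bce(D(real_tiles), torch.ones(b, 1))
              + bce(D(fake.detach()), torch.zeros(b, 1)))
    loss_d.backward()
    opt_d.step()

    # Generator update: try to make the discriminator label the fakes as real.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(b, 1))
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()

# After training, G(torch.randn(n, latent_dim)) yields synthetic tiles that can
# be added to the segmentation training set described above.
</code>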
  
Such a simulator can be used for controlled generation. Another advantage of **latent space learning** is that it benefits downstream tasks, in particular storage and efficient searching. We have developed this simulator and plan to disseminate the method as a publication in the coming months.
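
As a simple illustration of what controlled generation could look like, the snippet below interpolates between two latent vectors of the (assumed) trained generator from the previous sketch; the factorised, independent mechanisms mentioned above would replace this plain linear walk.

<code python>
# Hypothetical example of controlled generation via latent-space interpolation.
import torch

def interpolate_tiles(G, z_a, z_b, steps=8):
    """Generate tiles along a straight line between two latent vectors."""
    alphas = torch.linspace(0.0, 1.0, steps).unsqueeze(1)   # (steps, 1)
    z = (1 - alphas) * z_a + alphas * z_b                   # (steps, latent_dim)
    with torch.no_grad():
        return G(z)                                         # (steps, n_px) tiles in [-1, 1]

# Example (hypothetical): z_a, z_b = torch.randn(1, 128), torch.randn(1, 128)
#                         tiles = interpolate_tiles(G, z_a, z_b)
</code>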
  
Results of this science case were presented at {{:wiki:egu2021-julka_etal.pdf|EGU21}}. The [[https://github.com/epn-ml/GMAP-mound-classification-|ML pipeline]] is available on our GitHub repository.
  
**References:**
  * Pozzobon, R., et al. (2019), Fluids mobilization in Arabia Terra, Mars: Depth of pressurized reservoir from mounds self-similar clustering, Icarus 321, 938, doi:10.1016/j.icarus.2018.12.023
  * De Toffoli, B., et al. (2019), Surface Expressions of Subsurface Sediment Mobilization Rooted into a Gas Hydrate-Rich Cryosphere on Mars, Scientific Reports 9, 8603, doi:10.1038/s41598-019-45057-7
  * Ronneberger, O., et al. (2015), U-Net: Convolutional Networks for Biomedical Image Segmentation. In: Navab N., Hornegger J., Wells W., Frangi A. (eds) Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015. Lecture Notes in Computer Science, vol 9351. Springer, Cham, doi:10.1007/978-3-319-24574-4_28
  * Goodfellow, I., et al. (2020), Generative adversarial networks, Communications of the ACM 63, doi:10.1145/3422622
  
  