From 5d83bc6e9188dbe6af04497f64ab59cfa83b3cb8 Mon Sep 17 00:00:00 2001
From: Maciej Wielgosz <maciej.wielgosz@nibio.no>
Date: Wed, 12 Oct 2022 10:20:48 +0200
Subject: [PATCH] update subsampling size

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 34bd008..b126c30 100644
--- a/README.md
+++ b/README.md
@@ -53,7 +53,7 @@ In order to run a whole pipeline run the following command: `./run_all.sh folder
 
 It may take a <ins> very long time </ins> depending on your machine and the number of files, and on whether you change the point-density parameter (150 by default) to a lower number. The lower the number, the bigger the point clouds to be processed and the more time it may take. Keep in mind that at some point (for too low a number) the pipeline may break.
 
-The default model which is available in the repo in `fsct\model\model.path` was trained on the nibio data with <ins> 10 cm sampling </ins> the val accuracy was approx. 0.92.
+The default model, available in the repo at `fsct\model\model.path`, was trained on the NIBIO data with <ins> 1 cm sampling (0.01 m) </ins>; the validation accuracy was approx. 0.92.
 
 Make sure you put the data in `*.las` format in this folder. If your files are in a different format, e.g. `*.laz`, you can use `python nibio_preprocessing/convert_files_in_folder.py --input_folder input_folder_name --output_folder output_folder las` to convert your files to `*.las` format.
 
-- 
GitLab
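
A minimal sketch of the workflow the patched README paragraph describes: convert a folder of `*.laz` files with the repo's converter, then run the pipeline on the converted folder. The folder names `data_laz` and `data_las` are hypothetical placeholders, not names used by the repo; only `nibio_preprocessing/convert_files_in_folder.py` and `run_all.sh` come from the README itself.

```shell
# Convert every *.laz in data_laz (hypothetical input folder) to *.las
# in data_las (hypothetical output folder), using the converter script
# referenced in the README; the trailing "las" selects the output format.
python nibio_preprocessing/convert_files_in_folder.py \
    --input_folder data_laz \
    --output_folder data_las \
    las

# Run the whole pipeline on the folder of converted *.las files.
./run_all.sh data_las
```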