Refactor translation function and update reports to use new area unit preference

- Renamed translation function from `t` to `tr_key` for clarity and consistency.
- Updated all instances of translation calls in `90_CI_report_with_kpis_agronomic_support.Rmd` and `91_CI_report_with_kpis_cane_supply.Rmd` to use `tr_key`.
- Introduced a new helper function `get_area_unit_label` to manage area unit preferences across the project.
- Modified area calculations in `91_CI_report_with_kpis_cane_supply.Rmd` to use the area from analysis data instead of recalculating it.
- Added area unit preference setting in `parameters_project.R` to allow for flexible reporting in either hectares or acres.
- Updated `MANUAL_PIPELINE_RUNNER.R` to include language parameter for report generation.
- Adjusted translations in the `translations.xlsx` file to reflect changes in the report structure.
Timon 2026-02-24 12:16:44 +01:00
parent 9afceea121
commit b487cc983f
9 changed files with 649 additions and 282 deletions


@@ -871,6 +871,112 @@ python python_app/23_convert_harvest_format.py angata
---
### OPTIONAL 5: Area Unit Selection UI (Hectares vs Acres)
**Why**: Allow projects to choose their preferred area unit (hectares or acres) in reports and dashboards
**Status**: R/Python business logic complete. Database schema + UI implementation pending.
**What's already done** (in this codebase):
- ✅ Unified area calculation function in `80_utils_common.R`: `calculate_area_from_geometry()`
- ✅ Area unit preference in `parameters_project.R`: `AREA_UNIT_PREFERENCE` (default: "hectare")
- ✅ Helper function: `get_area_unit_label()` for dynamic "ha" or "ac" display
- ✅ Refactored scripts 80/90/91 to use unified function and support user's area preference
- ✅ Area now included in KPI outputs (CSV/RDS/Excel) from script 80
- ✅ Scripts 90/91 read area from KPI files instead of recalculating
**What needs implementation (for your Laravel colleague)**:
**Step 1: Create database migration**
```php
<?php
use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;
return new class extends Migration
{
public function up(): void
{
Schema::table('projects', function (Blueprint $table) {
$table->enum('preferred_area_unit', ['hectare', 'acre'])
->default('hectare')
->after('client_type');
});
}
public function down(): void
{
Schema::table('projects', function (Blueprint $table) {
$table->dropColumn('preferred_area_unit');
});
}
};
```
**Step 2: Update Project model** (`laravel_app/app/Models/Project.php`)
```php
protected $fillable = [
// ... existing fields ...
'preferred_area_unit', // ADD THIS
];
```
**Step 3: Add form UI** (`laravel_app/app/Livewire/Projects/ProjectManager.php` or Blade template)
```blade
<div class="form-group mb-3">
<label for="preferred_area_unit" class="form-label">Area Unit Preference <span class="text-danger">*</span></label>
<div class="btn-group btn-group-sm" role="group">
<input type="radio" class="btn-check" name="preferred_area_unit"
id="unit_hectare" value="hectare"
wire:model="formData.preferred_area_unit" />
<label class="btn btn-outline-primary" for="unit_hectare">Hectares (ha)</label>
<input type="radio" class="btn-check" name="preferred_area_unit"
id="unit_acre" value="acre"
wire:model="formData.preferred_area_unit" />
<label class="btn btn-outline-primary" for="unit_acre">Acres (ac)</label>
</div>
@error('formData.preferred_area_unit')
<span class="text-danger small">{{ $message }}</span>
@enderror
</div>
```
**Step 4: Pass preference to R scripts** (in job/shell wrapper)
When launching R scripts, read `project->preferred_area_unit` and either:
- **Option A**: Write to `parameters_project.R` dynamically before script execution
- **Option B**: Pass as environment variable to scripts (R scripts read `Sys.getenv("AREA_UNIT")`)
Example (PowerShell wrapper):
```powershell
$areaUnit = $project.preferred_area_unit   # From database (PowerShell property access uses '.', not PHP's '->')
$env:AREA_UNIT = $areaUnit                 # Set environment variable for the R process
& "C:\Program Files\R\R-4.4.3\bin\x64\Rscript.exe" r_app/80_calculate_kpis.R $project
```
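On the R side, Option B might look like the sketch below. This is illustrative, not existing code: the variable name `AREA_UNIT` and the fallback behaviour are assumptions.

```r
# Sketch: read the area unit preference from the environment (Option B).
# Assumption: the wrapper sets AREA_UNIT before launching Rscript.
env_unit <- tolower(Sys.getenv("AREA_UNIT", unset = ""))
AREA_UNIT_PREFERENCE <- if (env_unit %in% c("hectare", "acre")) {
  env_unit
} else {
  "hectare"  # fall back to the metric default from parameters_project.R
}
```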
**Testing checklist**:
- [ ] Database migration runs successfully
- [ ] Project form shows area unit radio buttons/dropdown
- [ ] Can select and save area unit preference
- [ ] Area unit persists in database
- [ ] Run script 80 with one project set to "hectare", observe KPI output
- [ ] Run script 80 with another project set to "acre", compare outputs
- [ ] Reports (scripts 90/91) display area in user's chosen unit
**Notes**:
- Default preference: "hectare" (metric standard)
- Conversion factor used: 0.404686 (1 acre = 0.404686 hectares; hectares / 0.404686 = acres)
- All area calculations use EPSG:6933 (equal-area projection) for accuracy
- Area column in KPI outputs named dynamically: "Area_ha" or "Area_ac"
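For reference, `get_area_unit_label()` could be as simple as the sketch below. This is an illustration only; the actual definition lives in the R helpers and may differ.

```r
# Hypothetical sketch of the label helper referenced above.
get_area_unit_label <- function(unit = "hectare") {
  switch(tolower(unit),
         hectare = "ha",
         acre    = "ac",
         stop("Unit must be 'hectare' or 'acre'. Got: ", unit))
}
```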
---
## 📊 Summary: What Gets Changed
| Category | Files Modified | Changes Required |


@@ -352,6 +352,14 @@ calculate_tch_forecasted_kpi <- function(field_statistics, harvesting_data = NUL
#' @param ci_values_list List of CI values for each field (multiple weeks)
#'
#' @return Data frame with field-level decline indicators
#' @details
#' Uses FOUR_WEEK_TREND_* thresholds defined in 80_utils_common.R:
#' - FOUR_WEEK_TREND_STRONG_GROWTH_MIN (0.3)
#' - FOUR_WEEK_TREND_GROWTH_MIN (0.1)
#' - FOUR_WEEK_TREND_STABLE_THRESHOLD (±0.1)
#' - FOUR_WEEK_TREND_WEAK_DECLINE_THRESHOLD (-0.1)
#' - FOUR_WEEK_TREND_STRONG_DECLINE_MAX (-0.3)
#'
calculate_growth_decline_kpi <- function(ci_values_list) {
result <- data.frame(
field_idx = seq_len(length(ci_values_list)),
@@ -382,23 +390,18 @@ calculate_growth_decline_kpi <- function(ci_values_list) {
result$four_week_trend[field_idx] <- round(as.numeric(slope), 3)
# Categorize trend using consistent thresholds (note: must use global constants if available)
# Category ranges:
# slope >= 0.5: Strong growth (↑↑)
# 0.1 <= slope < 0.5: Weak growth (↑)
# -0.1 <= slope < 0.1: Stable (→)
# -0.3 < slope < -0.1: Weak decline (↓)
# slope <= -0.3: Strong decline (↓↓)
if (slope >= 0.5) {
# Categorize trend using shared constants from 80_utils_common.R
if (slope >= FOUR_WEEK_TREND_STRONG_GROWTH_MIN) {
result$trend_interpretation[field_idx] <- "Strong growth"
result$decline_severity[field_idx] <- "None"
} else if (slope >= 0.1) {
} else if (slope >= FOUR_WEEK_TREND_GROWTH_MIN) {
result$trend_interpretation[field_idx] <- "Weak growth"
result$decline_severity[field_idx] <- "None"
} else if (slope >= -0.1) {
} else if (slope >= -FOUR_WEEK_TREND_STABLE_THRESHOLD) {
result$trend_interpretation[field_idx] <- "Stable"
result$decline_severity[field_idx] <- "None"
} else if (slope > -0.3) {
} else if (slope > FOUR_WEEK_TREND_STRONG_DECLINE_MAX) {
result$trend_interpretation[field_idx] <- "Weak decline"
result$decline_severity[field_idx] <- "Low"
} else {
@@ -626,6 +629,29 @@ create_field_detail_table <- function(field_boundaries_sf, all_kpis, current_wee
result$Mean_CI <- NA_real_
}
# ============================================
# GROUP 0b: FIELD AREA (from geometry)
# ============================================
# Calculate field area using unified function and preferred unit
tryCatch({
field_areas <- calculate_area_from_geometry(field_boundaries_sf, unit = AREA_UNIT_PREFERENCE)
unit_label <- get_area_unit_label(AREA_UNIT_PREFERENCE)
area_col_name <- paste0("Area_", unit_label)
area_df <- data.frame(
field_idx = seq_along(field_areas),
area_value = field_areas,
stringsAsFactors = FALSE
)
colnames(area_df) <- c("field_idx", area_col_name)
result <- result %>%
left_join(area_df, by = "field_idx")
}, error = function(e) {
message(paste("Warning: Could not calculate field areas:", e$message))
result[[paste0("Area_", get_area_unit_label(AREA_UNIT_PREFERENCE))]] <<- NA_real_
})
# ============================================
# GROUP 1: FIELD UNIFORMITY (KPI 1)
# ============================================


@@ -48,47 +48,48 @@ CI_CHANGE_INCREASE_THRESHOLD <- CI_CHANGE_RAPID_GROWTH_THRESHOLD # Weekly CI
#' Calculate acreage for each field from geometry
#' @param field_boundaries_sf sf object with field geometries
#' @return data.frame with field and acreage columns
calculate_field_acreages <- function(field_boundaries_sf) {
#' @param unit Character. Unit preference: "hectare" or "acre" (default from AREA_UNIT_PREFERENCE)
#' @return data.frame with field and area columns (column name reflects unit: Area_ha or Area_ac)
calculate_field_acreages <- function(field_boundaries_sf, unit = AREA_UNIT_PREFERENCE) {
tryCatch({
# Project to equal-area CRS (EPSG:6933) for accurate area calculations
field_boundaries_proj <- sf::st_transform(field_boundaries_sf, "EPSG:6933")
lookup_df <- field_boundaries_proj %>%
# Get field identifier (handles pivot.geojson structure)
field_names <- field_boundaries_sf %>%
sf::st_drop_geometry() %>%
as.data.frame() %>%
mutate(
geometry_valid = sapply(seq_len(nrow(field_boundaries_proj)), function(idx) {
tryCatch({
sf::st_is_valid(field_boundaries_proj[idx, ])
}, error = function(e) FALSE)
}),
area_ha = 0
)
pull(any_of(c("field", "field_id", "Field_id", "name", "Name")))
# Calculate area for valid geometries
valid_indices <- which(lookup_df$geometry_valid)
areas_ha <- vapply(valid_indices, function(idx) {
tryCatch({
area_m2 <- as.numeric(sf::st_area(field_boundaries_proj[idx, ]))
area_m2 / 10000
}, error = function(e) NA_real_)
}, numeric(1))
lookup_df$area_ha[valid_indices] <- areas_ha
if (length(field_names) == 0 || all(is.na(field_names))) {
field_names <- seq_len(nrow(field_boundaries_sf))
}
# Convert hectares to acres
lookup_df %>%
mutate(acreage = area_ha / 0.404686) %>%
# Aggregate by field to handle multi-row fields (e.g., sub_fields)
# Use unified area calculation function
areas <- calculate_area_from_geometry(field_boundaries_sf, unit = unit)
# Create output data frame with unit-aware column name
unit_label <- get_area_unit_label(unit)
col_name <- paste0("Area_", unit_label)
result_df <- data.frame(
field = field_names,
area = areas,
stringsAsFactors = FALSE
)
colnames(result_df) <- c("field", col_name)
# Aggregate by field to handle multi-row fields (e.g., sub_fields)
result_df %>%
group_by(field) %>%
summarise(acreage = sum(acreage, na.rm = TRUE), .groups = "drop") %>%
select(field, acreage)
summarise(across(all_of(col_name), ~ sum(.x, na.rm = TRUE)), .groups = "drop")
}, error = function(e) {
message(paste("Warning: Could not calculate acreages from geometries -", e$message))
data.frame(field = character(0), acreage = numeric(0))
message(paste("Warning: Could not calculate areas from geometries -", e$message))
unit_label <- get_area_unit_label(unit)
col_name <- paste0("Area_", unit_label)
result_df <- data.frame(field = character(0))
result_df[[col_name]] <- numeric(0)
return(result_df)
})
}
#' Calculate age in weeks from planting date
#'
#' @param planting_date Date of planting
@@ -219,8 +220,12 @@ calculate_all_field_kpis <- function(current_stats,
message("\nBuilding final field analysis output...")
# Pre-calculate acreages
acreage_lookup <- calculate_field_acreages(field_boundaries_sf)
# Pre-calculate areas using unified function
acreage_lookup <- calculate_field_acreages(field_boundaries_sf, unit = AREA_UNIT_PREFERENCE)
# Determine area column name from result
unit_label <- get_area_unit_label(AREA_UNIT_PREFERENCE)
area_col_name <- paste0("Area_", unit_label)
field_analysis_df <- current_stats %>%
mutate(
@@ -230,9 +235,9 @@
# Column 3: Field_name (from GeoJSON)
Field_name = Field_id,
# Column 4: Acreage (from geometry)
# Column 4: Acreage (from geometry, unit-aware)
Acreage = {
acreages_vec <- acreage_lookup$acreage[match(Field_id, acreage_lookup$field)]
acreages_vec <- acreage_lookup[[area_col_name]][match(Field_id, acreage_lookup$field)]
if_else(is.na(acreages_vec), 0, acreages_vec)
},


@@ -40,14 +40,23 @@ if (!exists("PROJECT", envir = .GlobalEnv)) {
# CONSTANTS (from 80_calculate_kpis.R)
# ============================================================================
# Four-week trend thresholds
FOUR_WEEK_TREND_STRONG_GROWTH_MIN <- 0.5
# Four-week trend thresholds (CI units/week) - SYMMETRIC by design
# Report mapping:
# Strong growth (↑↑): Slope ≥ 0.3
# Weak growth (↑): Slope 0.1 to 0.3
# Stable (→): Slope -0.1 to +0.1
# Weak Decline (↓): Slope -0.3 to -0.1
# Strong Decline (↓↓): Slope < -0.3
FOUR_WEEK_TREND_STRONG_GROWTH_MIN <- 0.3
FOUR_WEEK_TREND_GROWTH_MIN <- 0.1
FOUR_WEEK_TREND_GROWTH_MAX <- 0.5
FOUR_WEEK_TREND_NO_GROWTH_RANGE <- 0.1
FOUR_WEEK_TREND_DECLINE_MAX <- -0.1
FOUR_WEEK_TREND_DECLINE_MIN <- -0.5
FOUR_WEEK_TREND_STRONG_DECLINE_MAX <- -0.5
FOUR_WEEK_TREND_STABLE_THRESHOLD <- 0.1
FOUR_WEEK_TREND_WEAK_DECLINE_THRESHOLD <- -0.1 # upper bound of Weak Decline / lower bound of Stable
FOUR_WEEK_TREND_STRONG_DECLINE_MAX <- -0.3
# ============================================================================
# AREA CALCULATION UNITS & CONVERSION
# ============================================================================
# Conversion constant: 1 acre = 0.404686 hectares (exact: 0.40468564224...); divide hectares by this to get acres
HECTARE_TO_ACRE_CONVERSION <- 0.404686
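Note the constant is hectares per acre, so hectares are divided by it to obtain acres. A quick sanity check (the `ha_to_ac` wrapper is illustrative, not part of the codebase):

```r
HECTARE_TO_ACRE_CONVERSION <- 0.404686   # hectares per acre
ha_to_ac <- function(ha) ha / HECTARE_TO_ACRE_CONVERSION

ha_to_ac(1)    # ~2.47105 acres in one hectare
ha_to_ac(100)  # ~247.105 acres in 100 hectares
```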
# CV Trend thresholds (8-week slope interpretation)
CV_SLOPE_STRONG_IMPROVEMENT_MIN <- -0.03
@@ -71,6 +80,85 @@ PHASE_DEFINITIONS <- data.frame(
stringsAsFactors = FALSE
)
# ============================================================================
# AREA CALCULATION FUNCTIONS (Unified across all scripts)
# ============================================================================
#' Calculate field area from geometry in specified unit
#'
#' Unified function for calculating polygon area from sf or SpatVect geometries.
#' Uses equal-area projection (EPSG:6933) for accurate calculations across all zones.
#'
#' @param geometry sf or SpatVect object containing field polygons
#' If sf: must have geometry column (auto-detected)
#' If SpatVect: terra object with geometry
#' @param unit Character. Output unit: "hectare" (default) or "acre"
#'
#' @return Numeric vector of areas in specified unit
#'
#' @details
#' **Projection Logic**:
#' - Input geometries are reprojected to EPSG:6933 (a cylindrical equal-area projection)
#' - This ensures accurate area calculations regardless of original CRS
#' - Equal-area projections are essential for agricultural analysis
#'
#' **Unit Conversion**:
#' - m² → hectares: divide by 10,000
#' - hectares → acres: multiply by 2.4711 (or divide by 0.404686)
#' - Direct m² → acres: divide by 4046.8564
#'
#' **Handling Multiple Geometries**:
#' - If `geometry` has multiple rows/features, returns vector of areas (one per feature)
#' - NA values are preserved and propagated to output
#'
#' @examples
#' # With sf object
#' library(sf)
#' fields_sf <- st_read("pivot.geojson")
#' areas_ha <- calculate_area_from_geometry(fields_sf, unit = "hectare")
#' areas_ac <- calculate_area_from_geometry(fields_sf, unit = "acre")
#'
#' # With SpatVect
#' library(terra)
#' fields_vect <- vect("pivot.geojson")
#' areas_ha <- calculate_area_from_geometry(fields_vect, unit = "hectare")
#'
#' @export
calculate_area_from_geometry <- function(geometry, unit = "hectare") {
# Validate unit parameter
unit_lower <- tolower(unit)
if (!unit_lower %in% c("hectare", "acre")) {
stop("Unit must be 'hectare' or 'acre'. Got: ", unit)
}
tryCatch({
# Branch by geometry type
if (inherits(geometry, "sf")) {
# Handle sf object
geometry_proj <- sf::st_transform(geometry, 6933)
areas_m2 <- as.numeric(sf::st_area(geometry_proj))
} else if (inherits(geometry, "SpatVect")) {
# Handle terra SpatVect object
geometry_proj <- terra::project(geometry, "EPSG:6933")
areas_m2 <- as.numeric(terra::expanse(geometry_proj))
} else {
stop("geometry must be an sf or terra SpatVect object. Got: ",
paste(class(geometry), collapse = ", "))
}
# Convert units
areas_ha <- areas_m2 / 10000 # m² → hectares
if (unit_lower == "hectare") {
return(areas_ha)
} else { # unit_lower == "acre"
return(areas_ha / HECTARE_TO_ACRE_CONVERSION)
}
}, error = function(e) {
stop("Error calculating area from geometry: ", e$message)
})
}
# ============================================================================
# WEEK/YEAR CALCULATION HELPERS (Consistent across all scripts)
# ============================================================================
@@ -97,10 +185,12 @@ calculate_target_week_and_year <- function(current_week, current_year, offset_we
target_week <- current_week - offset_weeks
target_year <- current_year
# Handle wrapping: when going back from week 1, wrap to week 52 of previous year
# Handle wrapping: when going back from week 1, wrap to last ISO week of previous year
# Compute last_week_of_year dynamically (some years have 53 weeks)
while (target_week < 1) {
target_week <- target_week + 52
target_year <- target_year - 1
last_week_of_year <- as.numeric(format(as.Date(paste0(target_year, "-12-28")), "%V"))
target_week <- target_week + last_week_of_year
}
return(list(week = target_week, year = target_year))
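The December 28 trick relies on ISO 8601: that date always falls in the last ISO week of its year, so `%V` recovers 52 or 53 as appropriate. For example:

```r
# 2020 had 53 ISO weeks, 2021 had 52
as.numeric(format(as.Date("2020-12-28"), "%V"))  # 53
as.numeric(format(as.Date("2021-12-28"), "%V"))  # 52
```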
@@ -262,11 +352,11 @@ categorize_four_week_trend <- function(ci_values_list) {
if (avg_weekly_change >= FOUR_WEEK_TREND_STRONG_GROWTH_MIN) {
return("strong growth")
} else if (avg_weekly_change >= FOUR_WEEK_TREND_GROWTH_MIN &&
avg_weekly_change < FOUR_WEEK_TREND_GROWTH_MAX) {
avg_weekly_change < FOUR_WEEK_TREND_STRONG_GROWTH_MIN) {
return("growth")
} else if (abs(avg_weekly_change) <= FOUR_WEEK_TREND_NO_GROWTH_RANGE) {
} else if (abs(avg_weekly_change) <= FOUR_WEEK_TREND_STABLE_THRESHOLD) {
return("no growth")
} else if (avg_weekly_change <= FOUR_WEEK_TREND_DECLINE_MIN &&
} else if (avg_weekly_change <= FOUR_WEEK_TREND_WEAK_DECLINE_THRESHOLD &&
avg_weekly_change > FOUR_WEEK_TREND_STRONG_DECLINE_MAX) {
return("decline")
} else if (avg_weekly_change < FOUR_WEEK_TREND_STRONG_DECLINE_MAX) {
@@ -977,6 +1067,28 @@ calculate_field_statistics <- function(field_boundaries_sf, week_num, year,
next
}
# Guard: detect cloud-masked dates (CI == 0 indicates no-data)
# When any extracted value is 0, treat the entire date as cloud-masked
has_zeros <- any(extracted$CI == 0, na.rm = TRUE)
if (has_zeros) {
# Cloud-masked date: skip temporal analysis, set stats to NA
message(paste(" [CLOUD] Field", field_name, "- entire date is cloud-masked (CI==0)"))
results_list[[length(results_list) + 1]] <- data.frame(
Field_id = field_name,
Mean_CI = NA_real_,
CV = NA_real_,
CI_range = NA_character_,
CI_Percentiles = NA_character_,
Pct_pixels_CI_gte_2 = NA_real_,
Cloud_pct_clear = 0,
Cloud_category = "No image available",
stringsAsFactors = FALSE
)
next
}
ci_vals <- extracted$CI[!is.na(extracted$CI)]
if (length(ci_vals) == 0) {
@@ -1139,6 +1251,12 @@ calculate_kpi_trends <- function(current_stats, prev_stats = NULL,
prev_field_analysis <- readr::read_csv(recent_file, show_col_types = FALSE,
col_types = readr::cols(.default = readr::col_character()),
col_select = c(Field_id, nmr_of_weeks_analysed, Phase))
# Convert nmr_of_weeks_analysed from character to integer (read as character via .default)
# Handle NAs appropriately during conversion
if (!is.null(prev_field_analysis) && "nmr_of_weeks_analysed" %in% names(prev_field_analysis)) {
prev_field_analysis$nmr_of_weeks_analysed <- suppressWarnings(as.integer(prev_field_analysis$nmr_of_weeks_analysed))
}
}
}
}, error = function(e) {
@@ -1286,7 +1404,7 @@ calculate_kpi_trends <- function(current_stats, prev_stats = NULL,
prev_analysis_row <- prev_field_analysis %>% dplyr::filter(Field_id == field_id)
if (nrow(prev_analysis_row) > 0) {
prev_nmr_weeks_analysis <- prev_analysis_row$nmr_of_weeks_analysed[1]
prev_nmr_weeks_analysis <- as.integer(prev_analysis_row$nmr_of_weeks_analysed[1])
if (!is.na(prev_nmr_weeks_analysis)) {
current_stats$nmr_of_weeks_analysed[i] <- prev_nmr_weeks_analysis + 1L
} else {
@@ -1402,7 +1520,9 @@ extract_ci_values <- function(ci_raster, field_vect) {
} else if (ncol(extracted) > 1) {
return(extracted[, ncol(extracted)])
} else {
return(extracted[, 1])
# Degenerate case: extracted has only ID column, no CI values
# Return NA vector of appropriate length instead of the ID column
return(rep(NA_real_, nrow(extracted)))
}
}
@@ -1415,8 +1535,9 @@ calculate_week_numbers <- function(report_date = Sys.Date()) {
previous_year <- current_year
if (previous_week < 1) {
previous_week <- 52
# Compute last ISO week of previous year dynamically (some years have 53 weeks)
previous_year <- current_year - 1
previous_week <- as.numeric(format(as.Date(paste0(previous_year, "-12-28")), "%V"))
}
return(list(


@@ -499,37 +499,88 @@ tryCatch({
})
# Helper function to handle missing translation keys
t <- function(key) {
tr_key <- function(key) {
if (key %in% names(tr)) {
txt <- glue(tr[key], .envir = parent.frame())
txt <- gsub("\n", " \n", txt)
return(enc2utf8(as.character(txt)))
} else if (is.na(key)) {
return(t("NA"))
return(tr_key("NA"))
} else if (key == "") {
return("")
} else {
return(paste0(key))
}
}
# ============================================================================
# SHARED TREND MAPPING HELPER
# ============================================================================
# Canonical function for converting trend text to arrows/formatted text
# Normalizes all legacy and current trend category names to standardized output
# Used by: combined_kpi_table, field_details_table, and compact_field_display chunks
map_trend_to_arrow <- function(text_vec, include_text = FALSE) {
# Normalize: convert to character and lowercase
text_lower <- tolower(as.character(text_vec))
# Apply mapping to each element
sapply(text_lower, function(text) {
# Handle NA and empty values
if (is.na(text) || text == "" || nchar(trimws(text)) == 0) {
return(NA_character_)
}
# Determine category and build output with translated labels
if (grepl("strong growth", text)) {
arrow <- "↑↑"
trans_key <- "Strong growth"
} else if (grepl("slight growth|weak growth|increasing", text) || (grepl("growth", text) && !grepl("no growth", text))) {
arrow <- "↑"
trans_key <- "Slight growth"
} else if (grepl("stable|no growth", text)) {
arrow <- "→"
trans_key <- "Stable"
} else if (grepl("weak decline|slight decline|moderate decline|^decline$", text)) {
arrow <- "↓"
trans_key <- "Slight decline"
} else if (grepl("strong decline|severe", text)) {
arrow <- "↓↓"
trans_key <- "Strong decline"
} else {
# Fallback: return "—" (em-dash) for arrow-only mode, raw text for text mode
# This signals an unmatched trend value that should be logged
return(if (include_text) as.character(text) else "—")
}
# Get translated label using tr_key()
label <- tr_key(trans_key)
# Return formatted output based on include_text flag
if (include_text) {
paste0(label, " (", arrow, ")")
} else {
arrow
}
}, USE.NAMES = FALSE)
}
```
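As an illustration, arrow-only mode depends solely on the category text, while `include_text = TRUE` additionally pulls labels through `tr_key()`, so its exact output depends on the loaded translations:

```r
# Arrow-only mode: output is independent of translations
map_trend_to_arrow(c("Strong growth", "moderate decline", NA))
# "↑↑" "↓" NA

# Text mode: label comes from tr_key(), e.g. "Slight growth (↑)" with English translations loaded
map_trend_to_arrow("weak growth", include_text = TRUE)
```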
<!-- Dynamic cover page -->
::: {custom-style="Cover_title" style="text-align:center; margin-top:120px;"}
<span style="font-size:100pt; line-height:1.0; font-weight:700;">`r t("cover_title")`</span>
<span style="font-size:100pt; line-height:1.0; font-weight:700;">`r tr_key("cover_title")`</span>
:::
::: {custom-style="Cover_subtitle" style="text-align:center; margin-top:18px;"}
<span style="font-size:20pt; font-weight:600;">`r t("cover_subtitle")`</span>
<span style="font-size:20pt; font-weight:600;">`r tr_key("cover_subtitle")`</span>
:::
\newpage
`r t("report_summary")`
`r tr_key("report_summary")`
`r t("report_structure")`
`r tr_key("report_structure")`
`r t("key_insights")`
`r tr_key("key_insights")`
```{r key_insights, echo=FALSE, results='asis'}
# Calculate key insights from KPI data
@@ -539,7 +590,7 @@ if (exists("summary_tables") && !is.null(summary_tables) && length(summary_table
# 1. Uniformity insights - group by interpretation
if (!is.null(summary_tables$uniformity) && nrow(summary_tables$uniformity) > 0) {
cat("\n", t("field_unif"))
cat("\n", tr_key("field_unif"))
uniformity_counts <- summary_tables$uniformity %>%
dplyr::select(interpretation, count = field_count)
@@ -547,14 +598,14 @@ if (exists("summary_tables") && !is.null(summary_tables) && length(summary_table
status <- uniformity_counts$interpretation[i]
count <- uniformity_counts$count[i]
if (!is.na(status) && !is.na(count) && count > 0) {
cat(" -", t("unif_status"))
cat(" -", tr_key("unif_status"))
}
}
}
# 2. Area change insights - group by interpretation
if (!is.null(summary_tables$area_change) && nrow(summary_tables$area_change) > 0) {
cat("\n\n", t("field_area"))
cat("\n\n", tr_key("field_area"))
area_counts <- summary_tables$area_change %>%
dplyr::select(interpretation, count = field_count)
@@ -562,14 +613,14 @@ if (exists("summary_tables") && !is.null(summary_tables) && length(summary_table
status <- area_counts$interpretation[i]
count <- area_counts$count[i]
if (!is.na(status) && !is.na(count) && count > 0) {
cat(" -", t("area_status"))
cat(" -", tr_key("area_status"))
}
}
}
# 3. Growth trend insights - group by trend_interpretation
if (!is.null(summary_tables$growth_decline) && nrow(summary_tables$growth_decline) > 0) {
cat("\n\n", t("growth_trend"))
cat("\n\n", tr_key("growth_trend"))
growth_counts <- summary_tables$growth_decline %>%
dplyr::select(trend = trend_interpretation, count = field_count)
@@ -577,14 +628,14 @@ if (exists("summary_tables") && !is.null(summary_tables) && length(summary_table
trend <- growth_counts$trend[i]
count <- growth_counts$count[i]
if (!is.na(trend) && !is.na(count) && count > 0) {
cat(" -", t("trend_status"))
cat(" -", tr_key("trend_status"))
}
}
}
# 4. Patchiness insights - group by patchiness_risk
if (!is.null(summary_tables$patchiness) && nrow(summary_tables$patchiness) > 0) {
cat("\n\n", t("patch_risk"))
cat("\n\n", tr_key("patch_risk"))
patchiness_counts <- summary_tables$patchiness %>%
dplyr::select(patchiness_risk, count = field_count)
@@ -592,25 +643,32 @@ if (exists("summary_tables") && !is.null(summary_tables) && length(summary_table
risk <- patchiness_counts$patchiness_risk[i]
count <- patchiness_counts$count[i]
if (!is.na(risk) && !is.na(count) && count > 0) {
cat(" -", t("patch_status"))
cat(" -", tr_key("patch_status"))
}
}
}
# 5. Total fields analyzed
total_fields <- sum(summary_tables$uniformity$field_count, na.rm = TRUE)
cat("\n\n", t("tot_fields_analyzed"))
if (!is.null(summary_tables$uniformity) && "field_count" %in% names(summary_tables$uniformity)) {
total_fields <- sum(summary_tables$uniformity$field_count, na.rm = TRUE)
} else {
total_fields <- NA_integer_
}
if (!is.na(total_fields)) {
cat("\n\n", tr_key("tot_fields_analyzed"))
}
} else {
cat(t("kpi_na"))
cat(tr_key("kpi_na"))
}
```
\newpage
`r t("section_i")`
`r tr_key("section_i")`
`r t("exec_summary")`\n\n
`r tr_key("exec_summary")`\n\n
```{r combined_kpi_table, echo=FALSE, results='asis'}
# Display KPI tables - standardized format with Level, Count, Percent columns
@@ -641,40 +699,9 @@ if (exists("summary_tables") && !is.null(summary_tables) && length(summary_table
display_level <- df[[level_col]]
}
# Helper function to convert trend interpretation to text + arrow format
# Works on vectors of text - handles both old and new category names
add_trend_arrows <- function(text_vec) {
# Handle NA and empty values
text_lower <- tolower(as.character(text_vec))
# Use sapply to apply mapping logic to each element
sapply(text_lower, function(text) {
if (is.na(text) || text == "") return(NA_character_)
# Map trend categories to text with arrows for KPI table
# Handles both OLD names (moderate/slight decline) and NEW names (weak/strong)
if (grepl("strong growth", text)) {
"Strong Growth (↑↑)"
} else if (grepl("weak growth", text)) {
"Weak Growth (↑)"
} else if (grepl("stable|no growth", text)) {
"Stable (→)"
} else if (grepl("weak decline", text)) {
"Weak Decline (↓)"
} else if (grepl("slight decline|moderate decline", text)) {
# Map old category names to new arrow format
"Weak Decline (↓)"
} else if (grepl("strong decline", text)) {
"Strong Decline (↓↓)"
} else {
as.character(text)
}
}, USE.NAMES = FALSE)
}
df %>%
dplyr::transmute(
Level = if (level_col == "trend_interpretation") add_trend_arrows(display_level) else as.character(display_level),
Level = if (level_col == "trend_interpretation") map_trend_to_arrow(display_level, include_text = TRUE) else as.character(display_level),
Count = as.integer(round(as.numeric(.data[[count_col]]))),
Percent = if (is.na(total)) {
NA_real_
@@ -720,11 +747,11 @@ if (exists("summary_tables") && !is.null(summary_tables) && length(summary_table
dplyr::select(KPI = KPI_display, Level, Count, Percent)
# Translate the table for visualization
names(display_df) <- c(t("KPI"), t("Level"), t("Count"), t("Percent"))
names(display_df) <- c(tr_key("KPI"), tr_key("Level"), tr_key("Count"), tr_key("Percent"))
display_df[, 1:2] <- lapply(display_df[, 1:2], function(col) sapply(col, tr_key))
ft <- flextable(display_df) %>%
merge_v(j = t("KPI")) %>%
merge_v(j = tr_key("KPI")) %>%
autofit()
cum_rows <- cumsum(kpi_group_sizes)
@@ -737,12 +764,12 @@ if (exists("summary_tables") && !is.null(summary_tables) && length(summary_table
ft
} else {
cat(t("no_kpi_table"))
cat(tr_key("no_kpi_table"))
}
}, error = function(e) {
safe_log(paste("Error displaying KPI tables:", e$message), "WARNING")
cat(t("kpi_table_error"))
cat(tr_key("kpi_table_error"))
})
} else {
@@ -751,7 +778,7 @@ if (exists("summary_tables") && !is.null(summary_tables) && length(summary_table
```
\newpage
`r t("field_alerts")`
`r tr_key("field_alerts")`
```{r field_alerts_table, echo=FALSE, results='asis'}
# Generate alerts for all fields
@@ -815,21 +842,21 @@ generate_field_alerts <- function(field_details_table) {
# Generate alerts based on priority level
if (priority_level == 1) {
field_alerts <- c(field_alerts, t("priority"))
field_alerts <- c(field_alerts, tr_key("priority"))
} else if (priority_level == 2) {
field_alerts <- c(field_alerts, t("monitor"))
field_alerts <- c(field_alerts, tr_key("monitor"))
}
# Priority 3: No alert (no stress)
# Keep other alerts for decline risk, patchiness risk, gap score
if (field_summary$highest_decline_risk %in% c("High", "Very-high")) {
field_alerts <- c(field_alerts, t("growth_decline"))
field_alerts <- c(field_alerts, tr_key("growth_decline"))
}
if (field_summary$highest_patchiness_risk == "High") {
field_alerts <- c(field_alerts, t("high_patchiness"))
field_alerts <- c(field_alerts, tr_key("high_patchiness"))
}
if (field_summary$max_gap_score > 20) {
field_alerts <- c(field_alerts, t("gaps_present"))
field_alerts <- c(field_alerts, tr_key("gaps_present"))
}
# Only add alerts if there are any (skip fields with no alerts)
@@ -898,7 +925,7 @@ if (exists("field_details_table") && !is.null(field_details_table) && nrow(field
autofit()
ft
} else {
cat(t("alerts_na"))
cat(tr_key("alerts_na"))
}
} else {
cat("Note: Field details data not available for alerts generation. Run 80_calculate_kpis.R to generate KPI data.\n")
@@ -956,18 +983,26 @@ if (!exists("field_details_table") || is.null(field_details_table)) {
# Get field names from geometries
field_names <- AllPivots0$field
# Try to calculate field sizes (area) from geometry if available
# Calculate field sizes (area) from geometry using unified function
field_sizes <- if (!is.null(sf::st_geometry(AllPivots0)) && !all(sf::st_is_empty(sf::st_geometry(AllPivots0)))) {
sf::st_area(sf::st_geometry(AllPivots0)) / 4046.86 # Convert m² to acres (1 acre = 4046.86 m²)
tryCatch({
calculate_area_from_geometry(AllPivots0, unit = AREA_UNIT_PREFERENCE)
}, error = function(e) {
safe_log(paste("Warning: Could not calculate area from geometry:", e$message), "WARNING")
rep(NA_real_, length(field_names))
})
} else {
rep(NA_real_, length(field_names))
}
# Create minimal field details table with actual data we have + NAs for missing KPI columns
# IMPORTANT: Use column names that match downstream code expectations (no spaces, match exact names)
unit_label <- get_area_unit_label(AREA_UNIT_PREFERENCE)
area_col_name <- paste0("Area_", unit_label)
field_details_table <- tibble::tibble(
Field_id = field_names,
Acreage = as.numeric(field_sizes),
!! area_col_name := as.numeric(field_sizes),
Growth_Uniformity = NA_character_,
TCH_Forecasted = NA_real_,
Gap_Score = NA_real_,
@@ -1145,7 +1180,7 @@ tryCatch({
# Choose color palette based on colorblind_friendly parameter
if (colorblind_friendly) {
fill_scale <- ggplot2::scale_fill_viridis_c(
name = t("legend_ci"),
name = tr_key("legend_ci"),
limits = c(1, 8),
direction = 1,
na.value = "transparent",
@@ -1155,7 +1190,7 @@ tryCatch({
# Use Red-Yellow-Green diverging palette (reversed for intuitive interpretation)
fill_scale <- ggplot2::scale_fill_distiller(
palette = "RdYlGn",
name = t("legend_ci"),
name = tr_key("legend_ci"),
limits = c(1, 8),
direction = 1, # Standard direction for RdYlGn
na.value = "transparent"
@@ -1219,7 +1254,7 @@ tryCatch({
panel.background = ggplot2::element_rect(fill = "white", color = NA)
) +
ggplot2::labs(
title = t("ci_overview_title")
title = tr_key("ci_overview_title")
)
# Print the map
@@ -1252,7 +1287,7 @@ tryCatch({
if (colorblind_friendly) {
# Use plasma for colorblind-friendly diverging visualization
fill_scale <- ggplot2::scale_fill_viridis_c(
name = t("legend_ci_change"),
name = tr_key("legend_ci_change"),
option = "plasma",
limits = c(-3, 3),
na.value = "transparent",
@@ -1262,7 +1297,7 @@ tryCatch({
# Use Red-Blue diverging palette (red=decline, blue=increase)
fill_scale <- ggplot2::scale_fill_distiller(
palette = "RdBu",
name = t("legend_ci_change"),
name = tr_key("legend_ci_change"),
limits = c(-3, 3),
direction = 1,
na.value = "transparent"
@@ -1326,7 +1361,7 @@ tryCatch({
panel.background = ggplot2::element_rect(fill = "white", color = NA)
) +
ggplot2::labs(
title = t("ci_change_title")
title = tr_key("ci_change_title")
)
# Print the map
@@ -1347,7 +1382,7 @@ tryCatch({
\newpage
`r t("section_ii")`
`r tr_key("section_ii")`
\newpage
@@ -1486,35 +1521,36 @@ tryCatch({
if (nrow(field_kpi) > 0) {
# Format KPIs as compact single line (no interpretations, just values)
kpi_parts <- c(
sprintf("**%s:** %.2f", t("cv_value"), field_kpi$CV),
sprintf("**%s:** %.2f", t("mean_ci"), field_kpi$Mean_CI)
sprintf("**%s:** %.2f", tr_key("cv_value"), field_kpi$CV),
sprintf("**%s:** %.2f", tr_key("mean_ci"), field_kpi$Mean_CI)
)
# Add Weekly_CI_Change if available (note: capital C and I)
if (!is.null(field_kpi$Weekly_CI_Change) && !is.na(field_kpi$Weekly_CI_Change)) {
change_sign <- ifelse(field_kpi$Weekly_CI_Change >= 0, "+", "")
kpi_parts <- c(kpi_parts, sprintf("**Δ%s:** %s%.2f", t("CI"), change_sign, field_kpi$Weekly_CI_Change))
kpi_parts <- c(kpi_parts, sprintf("**Δ%s:** %s%.2f", tr_key("CI"), change_sign, field_kpi$Weekly_CI_Change))
}
# Compact trend display with symbols (arrows only)
trend_compact <- case_when(
grepl("Strong growth", field_kpi$Trend_Interpretation, ignore.case = TRUE) ~ "↑↑",
grepl("Weak growth|Growth|Increasing", field_kpi$Trend_Interpretation, ignore.case = TRUE) ~ "↑",
grepl("Stable|No growth", field_kpi$Trend_Interpretation, ignore.case = TRUE) ~ "→",
grepl("Weak decline", field_kpi$Trend_Interpretation, ignore.case = TRUE) ~ "↓",
grepl("Strong decline|Severe", field_kpi$Trend_Interpretation, ignore.case = TRUE) ~ "↓↓",
TRUE ~ "?" # Fallback if no match found (shows as ? in report)
)
kpi_parts <- c(kpi_parts, sprintf("**%s:** %s", t("Trend"), trend_compact))
trend_compact <- map_trend_to_arrow(field_kpi$Trend_Interpretation, include_text = FALSE)
# Log warning if trend value was unmatched (fallback to em-dash)
if (!is.na(trend_compact) && trend_compact == "—") {
warning(paste0("Trend_Interpretation mismatch for field '", field_name, "': '",
field_kpi$Trend_Interpretation, "' did not match any expected pattern. ",
"Check 80_utils_agronomic_support.R trend categories or map_trend_to_arrow() patterns."))
}
kpi_parts <- c(kpi_parts, sprintf("**%s:** %s", tr_key("Trend"), trend_compact))
if (!is.na(field_kpi$TCH_Forecasted) && field_kpi$TCH_Forecasted > 0) {
kpi_parts <- c(kpi_parts, sprintf("**%s:** %.1f t/ha", t("Yield"), field_kpi$TCH_Forecasted))
kpi_parts <- c(kpi_parts, sprintf("**%s:** %.1f t/ha", tr_key("Yield"), field_kpi$TCH_Forecasted))
}
kpi_parts <- c(
kpi_parts,
sprintf("**%s:** %.0f%%", t("Gaps"), field_kpi$Gap_Score),
sprintf("**%s:** %s", t("Patchiness"), t(field_kpi$Patchiness_Risk))
sprintf("**%s:** %.0f%%", tr_key("Gaps"), field_kpi$Gap_Score),
sprintf("**%s:** %s", tr_key("Patchiness"), tr_key(field_kpi$Patchiness_Risk))
)
cat(paste(kpi_parts, collapse = " | "), "\n\n") # Double newline for markdown paragraph break
@@ -1568,24 +1604,44 @@ tryCatch({
})
```
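The chunk above calls the shared helper `map_trend_to_arrow()`, which is not part of this diff. A minimal sketch consistent with how it is used here (arrow symbols, em-dash fallback caught by the warning checks, an `include_text` flag) might be — the real implementation in `80_utils_agronomic_support.R` may differ, and the `include_text = TRUE` behaviour is assumed:

```r
# Hypothetical sketch of map_trend_to_arrow(); vectorised via dplyr::case_when()
map_trend_to_arrow <- function(trend, include_text = FALSE) {
  arrows <- dplyr::case_when(
    grepl("Strong growth", trend, ignore.case = TRUE)                 ~ "\u2191\u2191",
    grepl("Weak growth|Growth|Increasing", trend, ignore.case = TRUE) ~ "\u2191",
    grepl("Stable|No growth", trend, ignore.case = TRUE)              ~ "\u2192",
    grepl("Weak decline", trend, ignore.case = TRUE)                  ~ "\u2193",
    grepl("Strong decline|Severe", trend, ignore.case = TRUE)         ~ "\u2193\u2193",
    TRUE                                                              ~ "\u2014"  # fallback flagged by callers
  )
  if (include_text) paste(arrows, trend) else arrows  # include_text behaviour assumed
}
```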
`r t("detailed_field")`
`r tr_key("detailed_field")`
```{r detailed_field_table, echo=FALSE, results='asis'}
# Detailed field performance table
if (!exists("field_details_table") || is.null(field_details_table) || nrow(field_details_table) == 0) {
safe_log("No field details available for table", "WARNING")
cat(t("no_field_data"))
cat(tr_key("no_field_data"))
} else {
# Calculate field sizes from boundaries (convert to acres)
field_sizes_source <- if (exists("AllPivots_merged") && inherits(AllPivots_merged, "sf")) AllPivots_merged else AllPivots0
field_sizes_df <- field_sizes_source %>%
mutate(
field_size_acres = as.numeric(sf::st_area(geometry) / 4046.86) # m² to acres
) %>%
sf::st_drop_geometry() %>%
select(field, field_size_acres)
# Get area column from KPI data (e.g., Area_ha or Area_ac based on preference)
# If not available, calculate from boundaries
unit_label <- get_area_unit_label(AREA_UNIT_PREFERENCE)
area_col_name <- paste0("Area_", unit_label)
# Check if area column exists in field_details_table
if (area_col_name %in% names(field_details_table)) {
# Use area from KPI data
field_sizes_df <- field_details_table %>%
select(Field_id, !! area_col_name) %>%
rename(field_size_area = !! area_col_name) %>%
mutate(field = Field_id) %>%
select(field, field_size_area)
} else {
# Fallback: calculate field sizes from boundaries using unified function
field_sizes_source <- if (exists("AllPivots_merged") && inherits(AllPivots_merged, "sf")) AllPivots_merged else AllPivots0
field_sizes_df <- tryCatch({
field_areas <- calculate_area_from_geometry(field_sizes_source, unit = AREA_UNIT_PREFERENCE)
data.frame(
field = field_sizes_source$field,
field_size_area = as.numeric(field_areas),
stringsAsFactors = FALSE
)
}, error = function(e) {
safe_log(paste("Warning: Could not calculate field areas:", e$message), "WARNING")
data.frame(field = character(), field_size_area = numeric())
})
}
# Get field ages from CI quadrant if available
field_ages_df <- if (!is.null(CI_quadrant) && nrow(CI_quadrant) > 0) {
@@ -1616,8 +1672,8 @@ if (!exists("field_details_table") || is.null(field_details_table) || nrow(field
mutate(
# Only show yield forecast for fields >= 240 days old
TCH_Forecasted = if_else(is.na(Age_days) | Age_days < 240, NA_real_, TCH_Forecasted),
# Round numeric columns
field_size_acres = round(field_size_acres, 1),
# Round numeric columns (make sure field_size_area is rounded before rename)
field_size_area = round(field_size_area, 1),
Mean_CI = round(Mean_CI, 2),
CV = round(CV, 2),
Gap_Score = round(Gap_Score, 2),
@@ -1631,7 +1687,7 @@ if (!exists("field_details_table") || is.null(field_details_table) || nrow(field
mutate(Weekly_CI_Change = round(Weekly_CI_Change, 2)) %>%
select(
field = Field_id,
field_size = field_size_acres,
field_size = field_size_area,
mean_ci = Mean_CI,
weekly_ci_change = Weekly_CI_Change,
yield_forecast = TCH_Forecasted,
@@ -1654,39 +1710,45 @@ if (!exists("field_details_table") || is.null(field_details_table) || nrow(field
)
}
# Convert trend to arrows only (no text, just symbols)
# Convert trend to arrows only (canonical shared helper)
# Translate patchiness_risk levels
field_details_clean <- field_details_clean %>%
mutate(
# Map trend categories to arrows only
trend = case_when(
grepl("Strong growth", trend, ignore.case = TRUE) ~ "↑↑",
grepl("Weak growth", trend, ignore.case = TRUE) ~ "↑",
grepl("Stable", trend, ignore.case = TRUE) ~ "→",
grepl("Weak decline", trend, ignore.case = TRUE) ~ "↓",
grepl("Strong decline", trend, ignore.case = TRUE) ~ "↓↓",
TRUE ~ trend
),
patchiness_risk = sapply(patchiness_risk, t)
# Map trend categories to arrows only using shared helper
trend = map_trend_to_arrow(trend, include_text = FALSE),
patchiness_risk = sapply(patchiness_risk, tr_key)
)
# Log warnings for any unmatched trend values (em-dash fallback)
unmatched_trends <- field_details_clean %>%
filter(trend == "—") %>%
pull(field)
if (length(unmatched_trends) > 0) {
for (field_with_mismatch in unmatched_trends) {
warning(paste0("Trend_Interpretation mismatch in field_details_table for field '",
field_with_mismatch, "': value did not match expected trend patterns. ",
"Check 80_utils_agronomic_support.R trend categories or map_trend_to_arrow() patterns."))
}
}
# Translation labels for flextable
header_labels <- list(
field = t("field"),
field_size = t("field_size"),
mean_ci = t("mean_ci"),
weekly_ci_change = t("weekly_ci_change"),
yield_forecast = t("yield_forecast"),
gap_score = t("gap_score"),
trend = t("Trend"),
patchiness_risk = t("patchiness_risk"),
cv_value = t("cv_value")
field = tr_key("field"),
field_size = tr_key("field_size"),
mean_ci = tr_key("mean_ci"),
weekly_ci_change = tr_key("weekly_ci_change"),
yield_forecast = tr_key("yield_forecast"),
gap_score = tr_key("gap_score"),
trend = tr_key("Trend"),
patchiness_risk = tr_key("patchiness_risk"),
cv_value = tr_key("cv_value")
)
# Display the cleaned field table with flextable (fit to page width)
ft <- flextable(field_details_clean) %>%
set_header_labels(values = header_labels) %>%
set_caption(t("detailed_field_caption")) %>%
set_caption(tr_key("detailed_field_caption")) %>%
theme_booktabs() %>%
set_table_properties(width = 1, layout = "autofit") # Fit to 100% page width with auto-adjust
@@ -1696,7 +1758,7 @@ if (!exists("field_details_table") || is.null(field_details_table) || nrow(field
\newpage
`r t("section_iii")`
`r tr_key("section_iii")`
```{r include=FALSE}
# Try to get the CI graph in the report language; otherwise fall back to English
@@ -1705,116 +1767,116 @@ img_path <- ifelse(file.exists(target_img), target_img, "CI_graph_example.png")
```
<div align="center">
![`r t("ci_caption")`](`r img_path`){width=4in}
![`r tr_key("ci_caption")`](`r img_path`){width=4in}
</div>
`r t("sec_iii_1")`
`r tr_key("sec_iii_1")`
- `r t("kpi_i")`
- `r t("kpi_i_metric")`
- `r t("kpi_i_calc")`
- `r t("kpi_categories")`
- `r t("kpi_i_excellent")`
- `r t("kpi_i_good")`
- `r t("kpi_i_accept")`
- `r t("kpi_i_poor")`
- `r t("kpi_i_verypoor")`
- `r t("kpi_i_why")`
- `r tr_key("kpi_i")`
- `r tr_key("kpi_i_metric")`
- `r tr_key("kpi_i_calc")`
- `r tr_key("kpi_categories")`
- `r tr_key("kpi_i_excellent")`
- `r tr_key("kpi_i_good")`
- `r tr_key("kpi_i_accept")`
- `r tr_key("kpi_i_poor")`
- `r tr_key("kpi_i_verypoor")`
- `r tr_key("kpi_i_why")`
- `r t("kpi_ii")`
- `r t("kpi_ii_calc")`
- `r t("kpi_categories")`
- `r t("kpi_ii_rapid")`
- `r t("kpi_ii_positive")`
- `r t("kpi_ii_stable")`
- `r t("kpi_ii_declining")`
- `r t("kpi_ii_rapid_decline")`
- `r t("kpi_ii_why")`
- `r tr_key("kpi_ii")`
- `r tr_key("kpi_ii_calc")`
- `r tr_key("kpi_categories")`
- `r tr_key("kpi_ii_rapid")`
- `r tr_key("kpi_ii_positive")`
- `r tr_key("kpi_ii_stable")`
- `r tr_key("kpi_ii_declining")`
- `r tr_key("kpi_ii_rapid_decline")`
- `r tr_key("kpi_ii_why")`
- `r t("kpi_iii")`
- `r t("kpi_iii_applies")`
- `r t("kpi_iii_method")`
- `r t("kpi_iii_input")`
- `r t("kpi_iii_output")`
- `r t("kpi_iii_why")`
- `r tr_key("kpi_iii")`
- `r tr_key("kpi_iii_applies")`
- `r tr_key("kpi_iii_method")`
- `r tr_key("kpi_iii_input")`
- `r tr_key("kpi_iii_output")`
- `r tr_key("kpi_iii_why")`
- `r t("kpi_iv")`
- `r t("kpi_iv_calc")`
- `r t("kpi_categories")`
- `r t("kpi_iv_str_improve")`
- `r t("kpi_iv_weak_improve")`
- `r t("kpi_iv_stable")`
- `r t("kpi_iv_weak_decline")`
- `r t("kpi_iv_str_decline")`
- `r t("kpi_iv_why")`
- `r tr_key("kpi_iv")`
- `r tr_key("kpi_iv_calc")`
- `r tr_key("kpi_categories")`
- `r tr_key("kpi_iv_str_improve")`
- `r tr_key("kpi_iv_weak_improve")`
- `r tr_key("kpi_iv_stable")`
- `r tr_key("kpi_iv_weak_decline")`
- `r tr_key("kpi_iv_str_decline")`
- `r tr_key("kpi_iv_why")`
- `r t("kpi_v")`
- `r t("kpi_v_met1")`
- `r t("kpi_v_form")`
- `r t("kpi_v_range")`
- `r t("kpi_v_interpretation")`
- `r t("kpi_v_met2")`
- `r t("kpi_v_met2_range")`
- `r t("kpi_v_thresh")`
- `r t("kpi_v_risk")`
- `r t("kpi_v_minimal")`
- `r t("kpi_v_low")`
- `r t("kpi_v_medium")`
- `r t("kpi_v_high")`
- `r t("kpi_v_why")`
- `r tr_key("kpi_v")`
- `r tr_key("kpi_v_met1")`
- `r tr_key("kpi_v_form")`
- `r tr_key("kpi_v_range")`
- `r tr_key("kpi_v_interpretation")`
- `r tr_key("kpi_v_met2")`
- `r tr_key("kpi_v_met2_range")`
- `r tr_key("kpi_v_thresh")`
- `r tr_key("kpi_v_risk")`
- `r tr_key("kpi_v_minimal")`
- `r tr_key("kpi_v_low")`
- `r tr_key("kpi_v_medium")`
- `r tr_key("kpi_v_high")`
- `r tr_key("kpi_v_why")`
- `r t("kpi_vi")`
- `r t("kpi_vi_calc")`
- `r t("kpi_vi_identify")`
- `r t("kpi_vi_calculates")`
- `r t("kpi_vi_example")`
- `r t("kpi_vi_scores")`
- `r t("kpi_vi_0")`
- `r t("kpi_vi_10")`
- `r t("kpi_vi_25")`
- `r t("kpi_vi_why")`
- `r tr_key("kpi_vi")`
- `r tr_key("kpi_vi_calc")`
- `r tr_key("kpi_vi_identify")`
- `r tr_key("kpi_vi_calculates")`
- `r tr_key("kpi_vi_example")`
- `r tr_key("kpi_vi_scores")`
- `r tr_key("kpi_vi_0")`
- `r tr_key("kpi_vi_10")`
- `r tr_key("kpi_vi_25")`
- `r tr_key("kpi_vi_why")`
`r t("sec_iii_2_to_6")`
`r tr_key("sec_iii_2_to_6")`
---
`r t("hist_benchmark")`
`r tr_key("hist_benchmark")`
`r t("unif_v_patch")`
`r tr_key("unif_v_patch")`
- `r t("unif")`
- `r t("patch")`
- `r tr_key("unif")`
- `r tr_key("patch")`
`r t("practical_example")`
`r tr_key("practical_example")`
- `r t("field_a")`
- `r t("field_b")`
- `r tr_key("field_a")`
- `r tr_key("field_b")`
`r t("scouting")`
`r tr_key("scouting")`
\newpage
`r t("metadata")`
`r tr_key("metadata")`
```{r report_metadata, echo=FALSE, results='asis'}
metadata_info <- data.frame(
Metric = c(t("report_gen"), t("data_source"), t("analysis_period"), t("tot_fields"), t("next_update")),
Metric = c(tr_key("report_gen"), tr_key("data_source"), tr_key("analysis_period"), tr_key("tot_fields"), tr_key("next_update")),
Value = c(
format(Sys.time(), "%Y-%m-%d %H:%M:%S"),
paste(t("project"), toupper(project_dir)),
paste(t("week"), current_week, t("of"), year),
ifelse(exists("AllPivots0"), nrow(AllPivots0 %>% filter(!is.na(field)) %>% group_by(field) %>% summarise()), t("unknown")),
t("next_wed")
paste(tr_key("project"), toupper(project_dir)),
paste(tr_key("week"), current_week, tr_key("of"), year),
ifelse(exists("AllPivots0"), nrow(AllPivots0 %>% filter(!is.na(field)) %>% group_by(field) %>% summarise()), tr_key("unknown")),
tr_key("next_wed")
)
)
# Set names of columns according to localisation
names(metadata_info) <- c(t("metric"), t("value"))
names(metadata_info) <- c(tr_key("metric"), tr_key("value"))
ft <- flextable(metadata_info) %>%
set_caption(t("metadata_caption")) %>%
set_caption(tr_key("metadata_caption")) %>%
autofit()
ft
```
`r t("disclaimer")`
`r tr_key("disclaimer")`
View file
@@ -522,32 +522,44 @@ tryCatch({
}
}
# Define constants
ACRE_CONV <- 4046.856
TARGET_CRS <- 4326 # WGS84 for web basemap compatibility (was 32736 UTM)
# Define constants and unit preference
TARGET_CRS <- 4326 # WGS84 for web basemap compatibility
unit_label <- get_area_unit_label(AREA_UNIT_PREFERENCE)
# Process polygons into points
# IMPORTANT: Calculate centroids in projected CRS (UTM 36S for southern Africa) to avoid
# st_centroid warnings about longitude/latitude data, then transform back to WGS84
# IMPORTANT: Use area from field_analysis_df (calculated in script 80) rather than recalculating
# This ensures consistent unit preference across all outputs
points_processed <- field_boundaries_sf %>%
st_make_valid() %>%
# Note: Area will come from the left_join with analysis_data below
filter(
# Only include fields that exist in the analysis data
field %in% analysis_data$Field_id
) %>%
left_join(
# Get area and status from analysis data (area calculated in script 80)
analysis_data %>%
left_join(
summary_data$field_analysis %>% select(Field_id, Acreage),
by = "Field_id"
) %>%
select(Field_id, Status_trigger, Acreage),
by = c("field" = "Field_id")
) %>%
mutate(
# Calculate area, convert to numeric to strip units, divide by conversion factor
area_ac = round(as.numeric(st_area(geometry)) / ACRE_CONV, 2)
# Rename Acreage to area_value for consistency with rest of code
# (If we ever change unit preference, this will have already been calculated in the correct unit by script 80)
area_value = round(Acreage, 2)
) %>%
filter(
# Filter polygons with no surface area
!is.na(area_ac), area_ac > 0
) %>%
left_join (
# Add the status_trigger information
analysis_data %>% select(Field_id, Status_trigger),
by = c("field" = "Field_id")
# Filter polygons with no area
!is.na(area_value), area_value > 0
) %>%
st_transform(crs = 32736) %>% # UTM zone 36S (southern Africa)
st_centroid() %>%
st_transform(crs = TARGET_CRS) %>%
bind_cols(st_coordinates(.))
# Validate coordinates - check for NaN, Inf, or missing values
if (any(!is.finite(points_processed$X)) || any(!is.finite(points_processed$Y))) {
@@ -591,7 +603,7 @@ tryCatch({
# Hexbin for NOT ready fields (light background)
geom_hex(
data = points_not_ready,
aes(x = X, y = Y, weight = area_ac, alpha = "Not harvest ready"),
aes(x = X, y = Y, weight = area_value, alpha = "Not harvest ready"),
binwidth = c(0.012, 0.012),
fill = "#ffffff",
colour = "#0000009a",
@@ -600,13 +612,13 @@ tryCatch({
# Hexbin for READY fields (colored gradient)
geom_hex(
data = points_ready,
aes(x = X, y = Y, weight = area_ac),
aes(x = X, y = Y, weight = area_value),
binwidth = c(0.012, 0.012),
alpha = 0.9,
colour = "#0000009a",
linewidth = 0.1
) +
# Color gradient scale for acreage
# Color gradient scale for area
scale_fill_viridis_b(
option = "viridis",
direction = -1,
View file
@@ -438,8 +438,15 @@
# rmarkdown::render(
rmarkdown::render(
"r_app/90_CI_report_with_kpis_agronomic_support.Rmd",
params = list(data_dir = "aura", report_date = as.Date("2026-02-18")),
output_file = "SmartCane_Report_agronomic_support_aura_2026-02-18.docx",
params = list(data_dir = "aura", report_date = as.Date("2026-02-18"), language = "en" ),
output_file = "SmartCane_Report_agronomic_support_aura_2026-02-18_en.docx",
output_dir = "laravel_app/storage/app/aura/reports"
)
rmarkdown::render(
"r_app/90_CI_report_with_kpis_agronomic_support.Rmd",
params = list(data_dir = "aura", report_date = as.Date("2026-02-18"), language = "es" ),
output_file = "SmartCane_Report_agronomic_support_aura_2026-02-18_es.docx",
output_dir = "laravel_app/storage/app/aura/reports"
)
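The two per-language render calls above are identical except for the `language` param and filename suffix; as a sketch (not part of this commit, same `data_dir`/`report_date` assumed), they could be driven by a loop:

```r
# Hypothetical loop over report languages, mirroring the manual calls above
for (lang in c("en", "es")) {
  rmarkdown::render(
    "r_app/90_CI_report_with_kpis_agronomic_support.Rmd",
    params = list(data_dir = "aura", report_date = as.Date("2026-02-18"), language = lang),
    output_file = sprintf("SmartCane_Report_agronomic_support_aura_2026-02-18_%s.docx", lang),
    output_dir = "laravel_app/storage/app/aura/reports"
  )
}
```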
#
View file
@@ -41,6 +41,34 @@ suppressPackageStartupMessages({
# ~240 days ≈ 8 months, typical sugarcane maturity window
DAH_MATURITY_THRESHOLD <- 240
# ============================================================================
# AREA UNIT PREFERENCE
# ============================================================================
# Unit preference for area reporting: "hectare" or "acre"
# This cascades through all KPI calculations, exports, and reports
# Future: can be overridden per-project from Laravel database (preferred_area_unit column)
AREA_UNIT_PREFERENCE <- "acre" # Options: "hectare", "acre"
#' Get area unit label for display
#'
#' @param unit Character. Unit preference ("hectare" or "acre")
#' @return Character. Short label ("ha" or "ac")
#'
#' @examples
#' get_area_unit_label("hectare") # Returns "ha"
#' get_area_unit_label("acre") # Returns "ac"
get_area_unit_label <- function(unit = AREA_UNIT_PREFERENCE) {
unit_lower <- tolower(unit)
if (unit_lower == "hectare") {
return("ha")
} else if (unit_lower == "acre") {
return("ac")
} else {
warning("Unknown unit: ", unit, ". Defaulting to 'ha'")
return("ha")
}
}
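As a sketch of how this label cascades into the dynamic column names used in scripts 80/90/91 above:

```r
# Usage sketch (mirrors the report code above, not part of this file)
unit_label    <- get_area_unit_label(AREA_UNIT_PREFERENCE)  # "ac" when preference is "acre"
area_col_name <- paste0("Area_", unit_label)                # e.g. "Area_ac"
```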
# Maps project names to client types for pipeline control
# This determines which scripts run and what outputs they produce
Binary file not shown.