Commit d32a861dfb (parent 47365f7c36), 2026-04-20 15:23:18 +12:00
310 changed files with 5881 additions and 0 deletions
Binary file not shown.
+169
@@ -0,0 +1,169 @@
# DESIGN.md — Ventia Brand Guidelines for SHEQ Tool
## Typography
**Primary font family:** Source Sans Pro
Source Sans Pro is our primary design font family used across all brand applications and designed collateral. It offers a wide range of weights in both roman and italic.
| Weight | Use Case |
|---------------|---------------------------------------|
| Bold (700) | Headings, stat callouts, table headers |
| SemiBold (600)| Sub-headings, emphasis labels |
| Regular (400) | Body text, table data, bullet points |
| Light (300) | Captions, footnotes, muted annotations |
**Fallback stack:** `"Source Sans Pro", "Source Sans 3", -apple-system, "Segoe UI", sans-serif`
### Sizing
| Element | DOCX (pt) | Web (rem) |
|--------------------|-----------|-----------|
| Report title | 28 | 2.0 |
| Section heading | 16 | 1.5 |
| Sub-heading | 13 | 1.15 |
| Body text | 11 | 1.0 |
| Table cell         | 9–10      | 0.85      |
| Caption / footnote | 8 | 0.75 |
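For DOCX output these sizes map one-to-one onto python-docx point values. A minimal sketch of applying the table above to the built-in Word styles (python-docx only; style names are Word's defaults):
```python
from docx import Document
from docx.shared import Pt

doc = Document()
doc.styles["Normal"].font.name = "Source Sans Pro"
doc.styles["Normal"].font.size = Pt(11)      # body text
doc.styles["Heading 1"].font.size = Pt(16)   # section heading
doc.styles["Heading 2"].font.size = Pt(13)   # sub-heading
```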
---
## Colour Palette
### Primary Colours
The signature colours are **Deep Blue** and **Sky Blue**. These should be the lead colours in most instances.
| Name | HEX | RGB | CMYK | PMS | Usage |
|-----------|-----------|-----------------|--------------------|----------|---------------------------------------------|
| Deep Blue | `#0b3254` | 11, 50, 84 | 100, 80, 39, 37 | PMS 540C | Headings, header bars, table headers, nav |
| Sky Blue | `#13b5ea` | 19, 181, 234 | 69, 7, 0, 0 | PMS 298C | Sub-headings, accents, links, chart highlight |
### Secondary Colours
The secondary palette allows for flexibility, versatility and personality in the brand.
| Name | HEX | RGB | CMYK | PMS | Usage |
|-------------|-----------|-----------------|--------------------|-----------|---------------------------------------------|
| Dark Green | `#006e47` | 0, 110, 71 | 100, 30, 88, 21 | PMS 7727C | Positive indicators, body part charts |
| Mid Green | `#009946` | 0, 153, 70 | 96, 10, 100, 1 | PMS 347C | Secondary positive, trend improvements |
| Light Green | `#7bc143` | 123, 193, 67 | 57, 0, 100, 0 | PMS 368C | Tertiary positive, low-severity shading |
| Purple | `#96358d` | 150, 53, 141 | 48, 94, 5, 0 | PMS 513C | Categorical distinction, chart series accent |
### Functional Colours
These are derived from the brand palette and used for semantic meaning in data visualisation and reporting.
| Role | Colour | HEX | Notes |
|---------------|-------------|-----------|--------------------------------------|
| Warning | Amber | `#d97706` | Moderate consequence, caution states |
| Critical | Red | `#dc2626` | Major/Substantial, LTI, alerts |
| Muted text | Slate grey | `#64748b` | Captions, secondary labels |
| Card background | Off-white | `#f0f5fa` | Alternating table rows, card bg |
| Page background | Near-white | `#f8fafc` | Web app body background |
| Borders | Light grey | `#e2e8f0` | Table borders, card edges |
---
## Colour Application
### DOCX Reports
- **Page header:** Deep Blue `#0b3254` underline rule
- **Heading 1:** Deep Blue `#0b3254`, Source Sans Pro Bold 16pt
- **Heading 2:** Sky Blue `#13b5ea`, Source Sans Pro SemiBold 13pt
- **Table header row:** Deep Blue `#0b3254` fill, white text (see the shading sketch after this list)
- **Alternating rows:** Off-white `#f0f5fa` / white `#ffffff`
- **Footer text:** Slate grey `#64748b`
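python-docx has no direct cell-fill property, so the header and alternating-row fills above are written as raw OOXML shading elements. A minimal sketch (this mirrors `_set_cell_shading` in `analysis.py`; note that OOXML fills omit the `#`):
```python
from docx.oxml import parse_xml
from docx.oxml.ns import nsdecls

def shade_cell(cell, fill_hex: str) -> None:
    """Apply a solid background fill (hex without '#') to a table cell."""
    shd = parse_xml(f'<w:shd {nsdecls("w")} w:val="clear" w:fill="{fill_hex}"/>')
    cell._tc.get_or_add_tcPr().append(shd)
```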
### Web App
- **Navigation / header bar:** Deep Blue `#0b3254`
- **Primary buttons:** Deep Blue `#0b3254`
- **Secondary buttons:** Sky Blue `#13b5ea`
- **Active accent / links:** Sky Blue `#13b5ea`
- **Sidebar background:** White `#ffffff` with light grey border
- **Body background:** Near-white `#f8fafc`
### Charts & Data Visualisation
Use this sequence for multi-series charts:
```
Series 1: Deep Blue #0b3254
Series 2: Sky Blue #13b5ea
Series 3: Dark Green #006e47
Series 4: Mid Green #009946
Series 5: Light Green #7bc143
Series 6: Purple #96358d
Series 7: Amber #d97706
Series 8: Red #dc2626
```
For PD comparison charts:
- **PD 1 (prior):** Deep Blue `#0b3254`
- **PD 2 (current):** Sky Blue `#13b5ea`
For consequence severity (see the dict sketch after this list):
- **Negligible:** Dark Green `#006e47`
- **Minor:** Amber `#d97706`
- **Moderate:** Red `#dc2626`
- **Major / Substantial:** Purple `#96358d`
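In code this mapping collapses to a single dict; a sketch (the constant name is illustrative, not part of the shipped modules):
```python
# Severity -> hex colour, per the list above.
SEVERITY_COLOURS = {
    "Negligible":  "#006e47",  # Dark Green
    "Minor":       "#d97706",  # Amber
    "Moderate":    "#dc2626",  # Red
    "Major":       "#96358d",  # Purple
    "Substantial": "#96358d",  # Purple (shares the Major colour)
}
```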
---
## CSS Variables (Web App)
```css
:root {
/* Primary */
--deep-blue: #0b3254;
--sky-blue: #13b5ea;
/* Secondary */
--dark-green: #006e47;
--mid-green: #009946;
--light-green: #7bc143;
--purple: #96358d;
/* Functional */
--amber: #d97706;
--red: #dc2626;
--muted: #64748b;
--card-bg: #f0f5fa;
--page-bg: #f8fafc;
--border: #e2e8f0;
--text: #1e293b;
--white: #ffffff;
/* Typography */
--font-primary: "Source Sans Pro", "Source Sans 3", -apple-system, "Segoe UI", sans-serif;
--font-heading: var(--font-primary);
}
```
---
## Python Constants (analysis.py)
```python
# Brand colours
DEEP_BLUE = "#0b3254"
SKY_BLUE = "#13b5ea"
DARK_GREEN = "#006e47"
MID_GREEN = "#009946"
LIGHT_GREEN = "#7bc143"
PURPLE = "#96358d"
AMBER = "#d97706"
RED = "#dc2626"
CHART_PALETTE = [DEEP_BLUE, SKY_BLUE, DARK_GREEN, MID_GREEN, LIGHT_GREEN, PURPLE, AMBER, RED]
```
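A minimal sketch of wiring `CHART_PALETTE` into matplotlib's default colour cycle, so multi-series charts pick up the brand sequence automatically (`cycler` ships as a matplotlib dependency):
```python
import matplotlib.pyplot as plt
from cycler import cycler

plt.rcParams["axes.prop_cycle"] = cycler(color=CHART_PALETTE)

fig, ax = plt.subplots()
for i in range(3):               # the first three series take Deep Blue,
    ax.plot([0, 1], [i, i + 1])  # Sky Blue, then Dark Green, in order
```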
---
## Notes
- Source Sans Pro is available from [Google Fonts](https://fonts.google.com/specimen/Source+Sans+Pro) (now published there as Source Sans 3) and should be installed locally for DOCX rendering. In the web app, import it via the Google Fonts CDN.
- When Source Sans Pro is unavailable in a DOCX context (e.g. recipient doesn't have it installed), the fallback is Calibri then Arial.
- Always maintain sufficient contrast — do not place Sky Blue text on white backgrounds at small sizes. Use Deep Blue for body text and Sky Blue for headings/accents only.
(10 binary files; previews not shown)
+694
@@ -0,0 +1,694 @@
"""
SHEQ Incident Analysis Engine
Generates charts and a DOCX report comparing two Project Director periods.
Usage:
from analysis import run_analysis
run_analysis("All_Events__5_.xlsx", "2024-01-01", "2025-04-01",
"Matthew Arthur", "Manga", output_dir="output")
"""
import os
import pandas as pd
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt
import numpy as np
from docx import Document
from docx.shared import Inches, Pt, RGBColor
from docx.enum.text import WD_ALIGN_PARAGRAPH
from docx.enum.table import WD_TABLE_ALIGNMENT
from docx.oxml.ns import nsdecls
from docx.oxml import parse_xml
# ── Brand Colours (see DESIGN.md) ──
# Primary
DEEP_BLUE = RGBColor(0x0B, 0x32, 0x54)
SKY_BLUE = RGBColor(0x13, 0xB5, 0xEA)
# Secondary
DARK_GREEN = RGBColor(0x00, 0x6E, 0x47)
MID_GREEN = RGBColor(0x00, 0x99, 0x46)
LIGHT_GREEN = RGBColor(0x7B, 0xC1, 0x43)
PURPLE = RGBColor(0x96, 0x35, 0x8D)
# Functional
GREY = RGBColor(0x64, 0x74, 0x8B)
# Aliases used throughout
NAVY = DEEP_BLUE
TEAL = SKY_BLUE
GREEN = DARK_GREEN
# Hex versions for matplotlib
DEEP_BLUE_HEX = "#0b3254"
SKY_BLUE_HEX = "#13b5ea"
DARK_GREEN_HEX = "#006e47"
MID_GREEN_HEX = "#009946"
LIGHT_GREEN_HEX = "#7bc143"
PURPLE_HEX = "#96358d"
AMBER_HEX = "#d97706"
RED_HEX = "#dc2626"
# Chart palette sequence per DESIGN.md
CHART_PALETTE = [DEEP_BLUE_HEX, SKY_BLUE_HEX, DARK_GREEN_HEX, MID_GREEN_HEX,
LIGHT_GREEN_HEX, PURPLE_HEX, AMBER_HEX, RED_HEX]
# PD comparison colours
MA_HEX = DEEP_BLUE_HEX # PD1 = Deep Blue
MG_HEX = SKY_BLUE_HEX # PD2 = Sky Blue
# ═══════════════════════════════════════════════
# DATA LOADING & PREPARATION
# ═══════════════════════════════════════════════
def load_and_prepare(filepath, start_date, split_date):
"""Load Excel, filter by date range, add PD column."""
df = pd.read_excel(filepath)
df["Event Date"] = pd.to_datetime(df["Event Date"])
df = df[df["Event Date"] >= pd.Timestamp(start_date)].copy()
df["Year"] = df["Event Date"].dt.year
df["Month"] = df["Event Date"].dt.month
df["MonthName"] = df["Event Date"].dt.strftime("%b")
df["DOW"] = df["Event Date"].dt.day_name()
df["YearMonth"] = df["Event Date"].dt.to_period("M")
df["PD"] = df["Event Date"].apply(
lambda x: "pd1" if x < pd.Timestamp(split_date) else "pd2"
)
return df
def get_body_parts(series):
"""Split multi-value body part entries and normalise."""
parts = []
for val in series.dropna():
for part in str(val).split(","):
part = part.strip()
if part and "unspecified" not in part.lower():
parts.append(part)
return pd.Series(parts)
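# Worked example for the splitter above (values are illustrative):
#   get_body_parts(pd.Series(["Hand, Wrist", "Lower Back", "Unspecified"]))
#   -> pd.Series(["Hand", "Wrist", "Lower Back"])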
# ═══════════════════════════════════════════════
# CHART GENERATION
# ═══════════════════════════════════════════════
def _save(fig, path):
fig.tight_layout()
fig.savefig(path, dpi=200, bbox_inches="tight", facecolor="white")
plt.close(fig)
def _setup_chart_style():
"""Configure matplotlib to use Source Sans Pro if available."""
import matplotlib.font_manager as fm
available = [f.name for f in fm.fontManager.ttflist]
if "Source Sans Pro" in available:
plt.rcParams["font.family"] = "Source Sans Pro"
elif "Source Sans 3" in available:
plt.rcParams["font.family"] = "Source Sans 3"
else:
plt.rcParams["font.family"] = "sans-serif"
def generate_charts(df, pd1_name, pd2_name, split_date, output_dir):
"""Generate all comparison charts, return dict of paths."""
_setup_chart_style()
charts = {}
pd1 = df[df["PD"] == "pd1"]
pd2 = df[df["PD"] == "pd2"]
# Consequence severity colours per DESIGN.md
CONS_COLORS = [DARK_GREEN_HEX, AMBER_HEX, RED_HEX, PURPLE_HEX]
# 1. Monthly trend by PD
fig, ax = plt.subplots(figsize=(10, 4))
start_period = df["Event Date"].min().to_period("M")
end_period = df["Event Date"].max().to_period("M")
months_all = pd.period_range(start_period, end_period, freq="M")
monthly = df.groupby(["YearMonth", "PD"]).size().unstack(fill_value=0).reindex(months_all, fill_value=0)
x = range(len(months_all))
labels = [m.strftime("%b %y") for m in months_all]
ma_vals = monthly.get("pd1", pd.Series(0, index=months_all)).values
mg_vals = monthly.get("pd2", pd.Series(0, index=months_all)).values
ax.bar(x, ma_vals, color=MA_HEX, label=pd1_name, width=0.7, alpha=0.9)
ax.bar(x, mg_vals, bottom=ma_vals, color=MG_HEX, label=pd2_name, width=0.7, alpha=0.9)
split_m = pd.Timestamp(split_date).to_period("M")
if split_m in months_all:
trans_idx = list(months_all).index(split_m)
ax.axvline(x=trans_idx - 0.5, color=RED_HEX, linestyle="--", linewidth=1.5, alpha=0.7)
ax.text(trans_idx - 0.3, max(max(ma_vals + mg_vals), 1) * 0.95, "PD Transition",
fontsize=9, color=RED_HEX, ha="left")
ax.set_xticks(x)
ax.set_xticklabels(labels, rotation=45, ha="right", fontsize=8)
ax.set_title("Monthly Events by Project Director", fontsize=14, fontweight="bold", color=MA_HEX)
ax.set_ylabel("Events")
ax.legend(loc="upper right")
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
p = os.path.join(output_dir, "monthly_by_pd.png")
_save(fig, p)
charts["monthly_by_pd"] = p
# 2. Event type comparison
evt_types = df["Event Type"].value_counts().index[:8]
ma_evt = pd1["Event Type"].value_counts().reindex(evt_types, fill_value=0)
mg_evt = pd2["Event Type"].value_counts().reindex(evt_types, fill_value=0)
fig, ax = plt.subplots(figsize=(9, 5))
y = np.arange(len(evt_types))
h = 0.35
ax.barh(y - h / 2, ma_evt.values, h, label=pd1_name, color=MA_HEX)
ax.barh(y + h / 2, mg_evt.values, h, label=pd2_name, color=MG_HEX)
ax.set_yticks(y)
ax.set_yticklabels(evt_types, fontsize=10)
ax.invert_yaxis()
ax.set_title("Event Types by Project Director", fontsize=14, fontweight="bold", color=MA_HEX)
ax.legend()
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
for i, (v1, v2) in enumerate(zip(ma_evt.values, mg_evt.values)):
ax.text(v1 + 0.2, i - h / 2, str(v1), va="center", fontsize=9, color=MA_HEX)
ax.text(v2 + 0.2, i + h / 2, str(v2), va="center", fontsize=9, color=MG_HEX)
p = os.path.join(output_dir, "event_type_by_pd.png")
_save(fig, p)
charts["event_type_by_pd"] = p
# 3. Consequence comparison (pie charts)
cons_order = ["Negligible", "Minor", "Moderate", "Major"]
fig, axes = plt.subplots(1, 2, figsize=(9, 3.5))
for ax, sub, title in zip(axes, [pd1, pd2], [pd1_name, pd2_name]):
data = sub["Actual Consequence"].value_counts().reindex(cons_order, fill_value=0)
ax.pie(data.values, labels=cons_order, autopct="%1.0f%%", colors=CONS_COLORS, startangle=140,
textprops={"fontsize": 9})
ax.set_title(title, fontsize=13, fontweight="bold", color=MA_HEX)
p = os.path.join(output_dir, "consequence_by_pd.png")
_save(fig, p)
charts["consequence_by_pd"] = p
# 4. Day of week
dow_order = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday"]
fig, ax = plt.subplots(figsize=(9, 4))
x_arr = np.arange(len(dow_order))
w = 0.35
ma_d = pd1["DOW"].value_counts().reindex(dow_order, fill_value=0)
mg_d = pd2["DOW"].value_counts().reindex(dow_order, fill_value=0)
b1 = ax.bar(x_arr - w / 2, ma_d.values, w, label=pd1_name, color=MA_HEX)
b2 = ax.bar(x_arr + w / 2, mg_d.values, w, label=pd2_name, color=MG_HEX)
ax.set_xticks(x_arr)
ax.set_xticklabels([d[:3] for d in dow_order])
ax.set_title("Events by Day of Week", fontsize=14, fontweight="bold", color=MA_HEX)
ax.legend()
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
for b in b1:
if b.get_height() > 0:
ax.text(b.get_x() + b.get_width() / 2, b.get_height() + 0.3, str(int(b.get_height())),
ha="center", fontsize=9)
for b in b2:
if b.get_height() > 0:
ax.text(b.get_x() + b.get_width() / 2, b.get_height() + 0.3, str(int(b.get_height())),
ha="center", fontsize=9)
p = os.path.join(output_dir, "dow_by_pd.png")
_save(fig, p)
charts["dow_by_pd"] = p
# 5. Root cause
rc_cats = ["External Factors", "People", "Production / Delivery", "Process", "Planning", "Providers"]
fig, ax = plt.subplots(figsize=(9, 4))
y = np.arange(len(rc_cats))
h = 0.35
ma_rc = pd1["Root Cause Category"].value_counts().reindex(rc_cats, fill_value=0)
mg_rc = pd2["Root Cause Category"].value_counts().reindex(rc_cats, fill_value=0)
ax.barh(y - h / 2, ma_rc.values, h, label=pd1_name, color=MA_HEX)
ax.barh(y + h / 2, mg_rc.values, h, label=pd2_name, color=MG_HEX)
ax.set_yticks(y)
ax.set_yticklabels(rc_cats, fontsize=10)
ax.invert_yaxis()
ax.set_title("Root Cause Categories by Project Director", fontsize=14, fontweight="bold", color=MA_HEX)
ax.legend()
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
p = os.path.join(output_dir, "rootcause_by_pd.png")
_save(fig, p)
charts["rootcause_by_pd"] = p
# 6. CRP comparison
crp_all = df["CRPInvolved"].value_counts()
crp_active = crp_all[~crp_all.index.isin(["None Identified", "Under Investigation"])].head(8)
crp_cats = crp_active.index
fig, ax = plt.subplots(figsize=(9, 4.5))
y = np.arange(len(crp_cats))
ma_c = pd1["CRPInvolved"].value_counts().reindex(crp_cats, fill_value=0)
mg_c = pd2["CRPInvolved"].value_counts().reindex(crp_cats, fill_value=0)
ax.barh(y - h / 2, ma_c.values, h, label=pd1_name, color=MA_HEX)
ax.barh(y + h / 2, mg_c.values, h, label=pd2_name, color=MG_HEX)
ax.set_yticks(y)
ax.set_yticklabels(crp_cats, fontsize=9)
ax.invert_yaxis()
ax.set_title("Critical Risk Protocols by Project Director", fontsize=14, fontweight="bold", color=MA_HEX)
ax.legend()
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
p = os.path.join(output_dir, "crp_by_pd.png")
_save(fig, p)
charts["crp_by_pd"] = p
# 7. Body parts
bp_series = get_body_parts(df["Bodily Location"])
if len(bp_series) > 0:
bp_top = bp_series.value_counts().head(10)
fig, ax = plt.subplots(figsize=(8, 4))
ax.barh(range(len(bp_top)), bp_top.values, color=DARK_GREEN_HEX)
ax.set_yticks(range(len(bp_top)))
ax.set_yticklabels(bp_top.index, fontsize=10)
ax.invert_yaxis()
for i, v in enumerate(bp_top.values):
ax.text(v + 0.1, i, str(v), va="center", fontsize=11, fontweight="bold")
ax.set_title("Top Injured Body Parts", fontsize=14, fontweight="bold", color=MA_HEX)
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
p = os.path.join(output_dir, "body_parts.png")
_save(fig, p)
charts["body_parts"] = p
return charts
# ═══════════════════════════════════════════════
# DOCX GENERATION
# ═══════════════════════════════════════════════
def _set_cell_shading(cell, color_hex):
"""Apply background shading to a table cell."""
shading = parse_xml(f'<w:shd {nsdecls("w")} w:fill="{color_hex}" w:val="clear"/>')
cell._tc.get_or_add_tcPr().append(shading)
def _add_styled_table(doc, headers, rows, col_widths_inches):
"""Add a formatted comparison table."""
table = doc.add_table(rows=1 + len(rows), cols=len(headers))
table.alignment = WD_TABLE_ALIGNMENT.LEFT
table.style = "Table Grid"
# Header row
for i, h in enumerate(headers):
cell = table.rows[0].cells[i]
cell.text = ""
p = cell.paragraphs[0]
run = p.add_run(h)
run.bold = True
run.font.size = Pt(9)
run.font.color.rgb = RGBColor(0xFF, 0xFF, 0xFF)
run.font.name = "Source Sans Pro"
_set_cell_shading(cell, "0b3254")
# Data rows
for ri, row in enumerate(rows):
for ci, val in enumerate(row):
cell = table.rows[ri + 1].cells[ci]
cell.text = ""
p = cell.paragraphs[0]
run = p.add_run(str(val))
run.font.size = Pt(9)
run.font.name = "Source Sans Pro"
bg = "F0F5FA" if ri % 2 == 0 else "FFFFFF"
_set_cell_shading(cell, bg)
# Set column widths
for i, w in enumerate(col_widths_inches):
for row in table.rows:
row.cells[i].width = Inches(w)
return table
def generate_docx(df, pd1_name, pd2_name, split_date, charts, output_dir):
"""Generate the full DOCX report."""
doc = Document()
# Set default font
style = doc.styles["Normal"]
style.font.name = "Source Sans Pro"
style.font.size = Pt(11)
# Heading styles
for level, size, color in [(1, 16, NAVY), (2, 13, TEAL)]:
hs = doc.styles[f"Heading {level}"]
hs.font.name = "Source Sans Pro"
hs.font.size = Pt(size)
hs.font.color.rgb = color
hs.font.bold = True
pd1 = df[df["PD"] == "pd1"]
pd2 = df[df["PD"] == "pd2"]
total = len(df)
pd1_months = max(1, (pd.Timestamp(split_date) - df["Event Date"].min()).days / 30.44)
pd2_months = max(1, (df["Event Date"].max() - pd.Timestamp(split_date)).days / 30.44 + 1)
pd1_start = pd1["Event Date"].min().strftime("%b %Y") if len(pd1) > 0 else "N/A"
pd1_end = pd1["Event Date"].max().strftime("%b %Y") if len(pd1) > 0 else "N/A"
pd2_start = pd2["Event Date"].min().strftime("%b %Y") if len(pd2) > 0 else "N/A"
pd2_end = pd2["Event Date"].max().strftime("%b %Y") if len(pd2) > 0 else "N/A"
# ── Title page ──
doc.add_paragraph("")
doc.add_paragraph("")
p = doc.add_paragraph()
p.alignment = WD_ALIGN_PARAGRAPH.CENTER
run = p.add_run("SHEQ Incident Analysis")
run.font.size = Pt(28)
run.bold = True
run.font.name = "Source Sans Pro"
run.font.color.rgb = NAVY
p = doc.add_paragraph()
p.alignment = WD_ALIGN_PARAGRAPH.CENTER
run = p.add_run("Far North Waters Project")
run.font.size = Pt(16)
run.font.name = "Source Sans Pro"
run.font.color.rgb = TEAL
p = doc.add_paragraph()
p.alignment = WD_ALIGN_PARAGRAPH.CENTER
run = p.add_run(f"{pd1_start} \u2013 {pd2_end} (MTD)")
run.font.size = Pt(14)
run.font.name = "Source Sans Pro"
run.font.color.rgb = TEAL
doc.add_paragraph("")
p = doc.add_paragraph()
p.alignment = WD_ALIGN_PARAGRAPH.CENTER
run = p.add_run("Performance by Project Director")
run.font.size = Pt(13)
run.bold = True
run.font.name = "Source Sans Pro"
run.font.color.rgb = NAVY
p = doc.add_paragraph()
p.alignment = WD_ALIGN_PARAGRAPH.CENTER
run = p.add_run(f"{pd1_name} ")
run.bold = True
run.font.color.rgb = NAVY
run = p.add_run(f"({pd1_start} \u2013 {pd1_end}) | ")
run.font.color.rgb = GREY
run = p.add_run(f"{pd2_name} ")
run.bold = True
run.font.color.rgb = TEAL
run = p.add_run(f"({pd2_start} \u2013 {pd2_end})")
run.font.color.rgb = GREY
doc.add_paragraph("")
p = doc.add_paragraph()
p.alignment = WD_ALIGN_PARAGRAPH.CENTER
run = p.add_run("Ventia \u2022 Infrastructure Services \u2022 Water & Environmental Services")
run.font.size = Pt(10)
run.font.color.rgb = GREY
doc.add_page_break()
# ── Helper functions ──
def h1(text):
doc.add_heading(text, level=1)
def h2(text):
doc.add_heading(text, level=2)
def text(t, bold=False):
p = doc.add_paragraph()
run = p.add_run(t)
run.bold = bold
return p
def bullet(t):
p = doc.add_paragraph(t, style="List Bullet")
return p
def add_chart(name, width=5.5):
if name in charts:
doc.add_picture(charts[name], width=Inches(width))
# Helper for injury classification
def _inj_class(sub):
return sub["Ventia Injury Classification"].value_counts()
# ═══════════════════════════════════════════
# 1. EXECUTIVE SUMMARY
# ═══════════════════════════════════════════
h1("1. Executive Summary")
text(f"This report analyses {total} SHEQ events recorded for the Far North Waters project "
f"from {pd1_start} to {pd2_end} (month-to-date). The analysis is structured around "
f"two Project Director tenures to enable performance comparison:")
pd1_inj = pd1[pd1["Event Type"] == "Injury/Illness Sustained"]
pd2_inj = pd2[pd2["Event Type"] == "Injury/Illness Sustained"]
pd1_mv = pd1[pd1["Event Type"] == "Motor Vehicle"]
pd2_mv = pd2[pd2["Event Type"] == "Motor Vehicle"]
pd1_ic = _inj_class(pd1)
pd2_ic = _inj_class(pd2)
pd1_cc = len(pd1[pd1["Event Type"] == "Close Call"])
pd2_cc = len(pd2[pd2["Event Type"] == "Close Call"])
pd1_mod = len(pd1[pd1["Actual Consequence"].isin(["Moderate", "Major", "Substantial"])])
pd2_mod = len(pd2[pd2["Actual Consequence"].isin(["Moderate", "Major", "Substantial"])])
_add_styled_table(doc,
["", pd1_name, pd2_name],
[
["Period", f"{pd1_start} \u2013 {pd1_end}", f"{pd2_start} \u2013 {pd2_end}"],
["Duration", f"{pd1_months:.0f} months", f"{pd2_months:.0f} months"],
["Total Events", str(len(pd1)), str(len(pd2))],
["Events per Month", f"{len(pd1)/pd1_months:.1f}", f"{len(pd2)/pd2_months:.1f}"],
["Injuries", f"{len(pd1_inj)} ({len(pd1_inj)/max(len(pd1),1)*100:.1f}%)",
f"{len(pd2_inj)} ({len(pd2_inj)/max(len(pd2),1)*100:.1f}%)"],
["Motor Vehicle Events", f"{len(pd1_mv)} ({len(pd1_mv)/max(len(pd1),1)*100:.1f}%)",
f"{len(pd2_mv)} ({len(pd2_mv)/max(len(pd2),1)*100:.1f}%)"],
["Lost Time Injuries", str(pd1_ic.get("Lost Time Injury", 0)), str(pd2_ic.get("Lost Time Injury", 0))],
["First Aid Treatments", str(pd1_ic.get("First Aid Treatment", 0)), str(pd2_ic.get("First Aid Treatment", 0))],
["Close Calls", f"{pd1_cc} ({pd1_cc/max(len(pd1),1)*100:.1f}%)",
f"{pd2_cc} ({pd2_cc/max(len(pd2),1)*100:.1f}%)"],
["Moderate+ Consequence", f"{pd1_mod} ({pd1_mod/max(len(pd1),1)*100:.1f}%)",
f"{pd2_mod} ({pd2_mod/max(len(pd2),1)*100:.1f}%)"],
["Median Days to Investigate", f"{pd1['Days to Investigate'].dropna().median():.0f}",
f"{pd2['Days to Investigate'].dropna().median():.0f}"],
["Median Days to Close", f"{pd1['Days to Close'].dropna().median():.0f}",
f"{pd2['Days to Close'].dropna().median():.0f}"],
],
[2.0, 2.2, 2.3]
)
doc.add_paragraph("")
h2("Key Comparative Findings")
rate1 = len(pd1) / pd1_months
rate2 = len(pd2) / pd2_months
bullet(f"Event rate {'increased' if rate2 > rate1 else 'decreased'} under {pd2_name} "
f"({rate2:.1f}/month vs {rate1:.1f}/month), with Moderate+ consequences at "
f"{pd2_mod/max(len(pd2),1)*100:.1f}% vs {pd1_mod/max(len(pd1),1)*100:.1f}%.")
bullet(f"Motor vehicle events: {len(pd2_mv)} under {pd2_name} vs {len(pd1_mv)} under {pd1_name} "
f"({len(pd2_mv)/max(len(pd2),1)*100:.1f}% vs {len(pd1_mv)/max(len(pd1),1)*100:.1f}%).")
bullet(f"Close call reporting: {pd2_cc/max(len(pd2),1)*100:.1f}% under {pd2_name} vs "
f"{pd1_cc/max(len(pd1),1)*100:.1f}% under {pd1_name}.")
lti1 = pd1_ic.get("Lost Time Injury", 0)
lti2 = pd2_ic.get("Lost Time Injury", 0)
if lti2 > lti1:
bullet(f"{lti2} Lost Time Injuries under {pd2_name} compared to {lti1} under {pd1_name}.")
doc.add_page_break()
# ═══════════════════════════════════════════
# 2. MONTHLY TRENDS
# ═══════════════════════════════════════════
h1("2. Monthly Event Trends")
text("The chart below shows monthly event counts across both Project Director periods.")
add_chart("monthly_by_pd", 5.8)
doc.add_page_break()
# ═══════════════════════════════════════════
# 3. EVENT TYPE COMPARISON
# ═══════════════════════════════════════════
h1("3. Event Type Comparison")
add_chart("event_type_by_pd", 5.5)
evt_types = df["Event Type"].value_counts().index
evt_rows = []
for e in evt_types:
c1 = len(pd1[pd1["Event Type"] == e])
c2 = len(pd2[pd2["Event Type"] == e])
evt_rows.append([e, str(c1), f"{c1/max(len(pd1),1)*100:.1f}%",
str(c2), f"{c2/max(len(pd2),1)*100:.1f}%"])
_add_styled_table(doc, ["Event Type", pd1_name, "%", pd2_name, "%"], evt_rows,
[2.0, 1.1, 0.8, 1.0, 0.8])
doc.add_paragraph("")
text("Notable shifts:", bold=True)
# Auto-detect biggest shifts
for e in evt_types:
c1 = len(pd1[pd1["Event Type"] == e])
c2 = len(pd2[pd2["Event Type"] == e])
pct1 = c1 / max(len(pd1), 1) * 100
pct2 = c2 / max(len(pd2), 1) * 100
if abs(pct2 - pct1) > 5:
direction = "increased" if pct2 > pct1 else "decreased"
bullet(f"{e} {direction}: {pct1:.1f}% \u2192 {pct2:.1f}% ({c1} \u2192 {c2} events).")
doc.add_page_break()
# ═══════════════════════════════════════════
# 4. INJURY ANALYSIS
# ═══════════════════════════════════════════
h1("4. Injury Analysis")
h2("4.1 Injury Classification")
inj_classes = ["First Aid Treatment", "Report Only", "Non-Work Related",
"Lost Time Injury", "Medical Treatment Injury"]
inj_rows = [[c, str(pd1_ic.get(c, 0)), str(pd2_ic.get(c, 0))] for c in inj_classes]
_add_styled_table(doc, ["Classification", pd1_name, pd2_name], inj_rows, [2.5, 1.8, 1.8])
h2("4.2 Body Parts Injured")
add_chart("body_parts", 5.0)
# Body part comparison
bp1 = get_body_parts(pd1["Bodily Location"]).value_counts().head(6)
bp2 = get_body_parts(pd2["Bodily Location"]).value_counts().head(6)
all_bp = list(dict.fromkeys(list(bp1.index) + list(bp2.index)))[:8]
bp_rows = [[bp, str(bp1.get(bp, 0)), str(bp2.get(bp, 0))] for bp in all_bp]
_add_styled_table(doc, ["Body Part", pd1_name, pd2_name], bp_rows, [2.5, 1.8, 1.8])
doc.add_page_break()
# ═══════════════════════════════════════════
# 5. CONSEQUENCE ANALYSIS
# ═══════════════════════════════════════════
h1("5. Consequence Analysis")
add_chart("consequence_by_pd", 5.5)
cons_order = ["Negligible", "Minor", "Moderate", "Major"]
cons_rows = []
for c in cons_order:
c1 = len(pd1[pd1["Actual Consequence"] == c])
c2 = len(pd2[pd2["Actual Consequence"] == c])
cons_rows.append([c, str(c1), f"{c1/max(len(pd1),1)*100:.1f}%",
str(c2), f"{c2/max(len(pd2),1)*100:.1f}%"])
_add_styled_table(doc, ["Consequence", pd1_name, "%", pd2_name, "%"], cons_rows,
[1.5, 1.0, 0.8, 1.0, 0.8])
doc.add_page_break()
# ═══════════════════════════════════════════
# 6. CRP & ROOT CAUSE
# ═══════════════════════════════════════════
h1("6. Critical Risk Protocols & Root Causes")
h2("6.1 CRP Comparison")
add_chart("crp_by_pd", 5.5)
h2("6.2 Root Cause Comparison")
add_chart("rootcause_by_pd", 5.5)
rc_cats = ["External Factors", "People", "Production / Delivery", "Process", "Planning", "Providers"]
rc_rows = []
for r in rc_cats:
c1 = len(pd1[pd1["Root Cause Category"] == r])
c2 = len(pd2[pd2["Root Cause Category"] == r])
t1 = pd1["Root Cause Category"].notna().sum()
t2 = pd2["Root Cause Category"].notna().sum()
rc_rows.append([r, str(c1), f"{c1/max(t1,1)*100:.1f}%",
str(c2), f"{c2/max(t2,1)*100:.1f}%"])
_add_styled_table(doc, ["Root Cause", pd1_name, "%", pd2_name, "%"], rc_rows,
[2.0, 1.1, 0.8, 1.0, 0.8])
doc.add_page_break()
# ═══════════════════════════════════════════
# 7. TIMING PATTERNS
# ═══════════════════════════════════════════
h1("7. Timing Patterns")
add_chart("dow_by_pd", 5.5)
doc.add_page_break()
# ═══════════════════════════════════════════
# 8. INVESTIGATION PERFORMANCE
# ═══════════════════════════════════════════
h1("8. Investigation Performance")
inv_rows = [
["Median Days to Investigate", f"{pd1['Days to Investigate'].dropna().median():.0f}",
f"{pd2['Days to Investigate'].dropna().median():.0f}"],
["Mean Days to Investigate", f"{pd1['Days to Investigate'].dropna().mean():.1f}",
f"{pd2['Days to Investigate'].dropna().mean():.1f}"],
["Median Days to Close", f"{pd1['Days to Close'].dropna().median():.0f}",
f"{pd2['Days to Close'].dropna().median():.0f}"],
["Mean Days to Close", f"{pd1['Days to Close'].dropna().mean():.1f}",
f"{pd2['Days to Close'].dropna().mean():.1f}"],
["Events Closed", f"{(pd1['Status']=='Closed').sum()} ({(pd1['Status']=='Closed').sum()/max(len(pd1),1)*100:.0f}%)",
f"{(pd2['Status']=='Closed').sum()} ({(pd2['Status']=='Closed').sum()/max(len(pd2),1)*100:.0f}%)"],
["Events Open", str((pd1["Status"] == "Open").sum()), str((pd2["Status"] == "Open").sum())],
]
_add_styled_table(doc, ["Metric", pd1_name, pd2_name], inv_rows, [2.5, 1.8, 1.8])
doc.add_page_break()
# ═══════════════════════════════════════════
# 9. RECOMMENDATIONS
# ═══════════════════════════════════════════
h1("9. Key Findings & Recommendations")
h2(f"9.1 Areas Requiring Attention ({pd2_name} Period)")
if len(pd2_mv) > len(pd1_mv):
bullet("Motor vehicle events have increased \u2014 reinforce journey management plans and reversing protocols.")
if pd2_mod / max(len(pd2), 1) > pd1_mod / max(len(pd1), 1):
bullet("Moderate+ consequence events have increased \u2014 investigate whether controls are being bypassed.")
if pd2_cc / max(len(pd2), 1) < pd1_cc / max(len(pd1), 1):
bullet("Close call reporting has declined \u2014 implement reporting targets and recognise reporters.")
if lti2 > lti1:
bullet(f"{lti2} LTIs under {pd2_name} vs {lti1} under {pd1_name} \u2014 review circumstances and RTW processes.")
h2("9.2 Systemic Issues (Both Periods)")
bullet("Lower back injuries from manual handling at pump stations persist \u2014 engineering controls needed.")
bullet("Third Party/Public Liability events remain a large category, driven by aging infrastructure.")
bullet("Wednesday remains the peak risk day \u2014 consider targeted mid-week safety interventions.")
h2("9.3 Recommended Actions")
bullet("Set a close-call reporting KPI (minimum 10% of all events) and track monthly.")
bullet("Implement a motor vehicle safety campaign focusing on reversing and traffic management.")
bullet("Schedule quarterly PD safety performance reviews using this report format.")
# ── Save ──
output_path = os.path.join(output_dir, "SHEQ_PD_Comparison.docx")
doc.save(output_path)
return output_path
# ═══════════════════════════════════════════════
# MAIN ENTRY POINT
# ═══════════════════════════════════════════════
def run_analysis(filepath, start_date, split_date, pd1_name, pd2_name, output_dir="output"):
"""Run the full analysis pipeline."""
os.makedirs(output_dir, exist_ok=True)
print(f"Loading data from {filepath}...")
df = load_and_prepare(filepath, start_date, split_date)
print(f" {len(df)} events loaded ({df['Event Date'].min().date()} to {df['Event Date'].max().date()})")
print(f" {pd1_name}: {(df['PD']=='pd1').sum()} events")
print(f" {pd2_name}: {(df['PD']=='pd2').sum()} events")
print("Generating charts...")
charts = generate_charts(df, pd1_name, pd2_name, split_date, output_dir)
print(f" {len(charts)} charts created")
print("Generating DOCX report...")
docx_path = generate_docx(df, pd1_name, pd2_name, split_date, charts, output_dir)
print(f" Report saved to {docx_path}")
return docx_path
if __name__ == "__main__":
run_analysis(
filepath="All_Events__5_.xlsx",
start_date="2024-01-01",
split_date="2025-04-01",
pd1_name="Matthew Arthur",
pd2_name="Manga",
output_dir="output"
)
+2251
File diff suppressed because it is too large
+296
@@ -0,0 +1,296 @@
"""
app.py — SHEQ Analysis Tool — Flask web application.
Run:
python app.py
Then open http://localhost:5000
The sidebar has two sections:
1. Events Explorer — filter and chart Events data interactively.
2. Generate Report — run the full analysis across Events, Safety Energy,
and LLC Data and download a comprehensive DOCX report.
"""
from __future__ import annotations
import logging
import os
from datetime import datetime
import pandas as pd
from flask import Flask, jsonify, render_template, request, send_file
from config import (
EVENTS_FILE, LLC_FILE, SAFETY_ENERGY_FILE,
DEFAULT_PD1_NAME, DEFAULT_PD2_NAME,
DEFAULT_START_DATE, DEFAULT_SPLIT_DATE,
OUTPUT_DIR,
)
from data_loader import load_all, get_body_parts
from analysis_engine import run_full_analysis
from report_builder import build_report
from ppt_builder import build_presentation
# ── Logging ──────────────────────────────────────────────────────────────────
logging.basicConfig(
level=logging.INFO,
format="%(asctime)s %(levelname)-8s %(name)s %(message)s",
datefmt="%H:%M:%S",
)
log = logging.getLogger("app")
app = Flask(__name__)
# ── Cached raw DataFrames (loaded on first request) ──────────────────────────
_CACHE: dict[str, pd.DataFrame | None] = {
"events": None,
}
def _get_events_df() -> pd.DataFrame:
"""Return the raw Events DataFrame, loading from disk on first call."""
if _CACHE["events"] is None:
log.info("Loading Events from %s", EVENTS_FILE)
df = pd.read_excel(EVENTS_FILE)
# Normalise date column — handle "Monday, 25 March 2024" and ISO formats
date_col = "EventDate" if "EventDate" in df.columns else "Event Date"
df["_date"] = df[date_col].apply(_parse_one_date)
_CACHE["events"] = df
return _CACHE["events"].copy()
def _parse_one_date(val) -> pd.Timestamp:
if pd.isna(val):
return pd.NaT
s = str(val).strip()
if "," in s and len(s.split(",")[0].split()) == 1:
s = s.split(",", 1)[1].strip()
try:
return pd.to_datetime(s, dayfirst=True)
except Exception:
return pd.NaT
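# Worked examples for the parser above (values are illustrative):
#   _parse_one_date("Monday, 25 March 2024") -> Timestamp("2024-03-25")
#   _parse_one_date("2024-03-25")            -> Timestamp("2024-03-25")
#   _parse_one_date(None)                    -> NaT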
# ─────────────────────────────────────────────────────────────────────────────
# Web UI
# ─────────────────────────────────────────────────────────────────────────────
@app.route("/")
def index():
df = _get_events_df()
min_date = df["_date"].min().strftime("%Y-%m-%d")
max_date = df["_date"].max().strftime("%Y-%m-%d")
# Handle both column name variants
evt_col = "EventType" if "EventType" in df.columns else "Event Type"
cons_col = "Actual Consequence"
event_types = sorted(df[evt_col].dropna().unique().tolist())
consequences = sorted(df[cons_col].dropna().unique().tolist())
return render_template(
"index.html",
min_date=min_date,
max_date=max_date,
event_types=event_types,
consequences=consequences,
total_events=len(df),
)
# ─────────────────────────────────────────────────────────────────────────────
# Events Explorer API
# ─────────────────────────────────────────────────────────────────────────────
@app.route("/api/filter", methods=["POST"])
def api_filter():
"""Return filtered summary stats as JSON for the Events Explorer."""
params = request.json or {}
df = _get_events_df()
evt_col = "EventType" if "EventType" in df.columns else "Event Type"
cons_col = "Actual Consequence"
crp_col = "CRP Involved" if "CRP Involved" in df.columns else "CRPInvolved"
rc_col = "Root Cause Category"
inj_col = "Ventia Injury Classification"
bp_col = "Bodily Location"
# Filters
if params.get("start_date"):
df = df[df["_date"] >= pd.Timestamp(params["start_date"])]
if params.get("end_date"):
df = df[df["_date"] <= pd.Timestamp(params["end_date"])]
if params.get("event_types"):
df = df[df[evt_col].isin(params["event_types"])]
if params.get("consequences"):
df = df[df[cons_col].isin(params["consequences"])]
if len(df) == 0:
return jsonify({"error": "No events match the selected filters.", "total": 0})
# Summary stats
evt_counts = df[evt_col].value_counts().to_dict()
cons_counts = df[cons_col].value_counts().to_dict()
inj_class = (
df[inj_col].value_counts().to_dict()
if inj_col in df.columns else {}
)
dow_order = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday"]
dow = (
df["_date"].dt.day_name().value_counts()
.reindex(dow_order, fill_value=0).to_dict()
)
monthly = df.groupby(df["_date"].dt.to_period("M")).size()
monthly_data = {str(k): int(v) for k, v in monthly.items()}
bp = (
get_body_parts(df[bp_col]).value_counts().head(10).to_dict()
if bp_col in df.columns else {}
)
rc = (
df[rc_col].value_counts().to_dict()
if rc_col in df.columns else {}
)
crp: dict = {}
if crp_col in df.columns:
crp = df[crp_col].value_counts().to_dict()
crp.pop("None Identified", None)
crp.pop("Under Investigation", None)
# Investigation performance — use available columns
lag_col = next((c for c in ("Days to Investigate", "Event Lag", "Days to Enter")
if c in df.columns), None)
close_col = "Days to Close" if "Days to Close" in df.columns else None
inv_med = df[lag_col].dropna().median() if lag_col else None
close_med = df[close_col].dropna().median() if close_col else None
return jsonify({
"total": len(df),
"date_range": (
f"{df['_date'].min().strftime('%d %b %Y')} "
f"\u2013 {df['_date'].max().strftime('%d %b %Y')}"
),
"event_types": evt_counts,
"consequences": cons_counts,
"injury_classification": inj_class,
"day_of_week": dow,
"monthly": monthly_data,
"body_parts": bp,
"root_causes": rc,
"crp": crp,
"median_investigate_days": round(inv_med, 1) if inv_med and pd.notna(inv_med) else None,
"median_close_days": round(close_med, 1) if close_med and pd.notna(close_med) else None,
"closed_pct": round(
(df["Status"] == "Closed").sum() / len(df) * 100, 1
) if "Status" in df.columns else None,
})
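# Example request against this endpoint (illustrative):
#   curl -s -X POST http://localhost:5000/api/filter \
#        -H "Content-Type: application/json" \
#        -d '{"start_date": "2024-01-01", "event_types": ["Close Call"]}'
# Returns the summary JSON above, restricted to Close Calls from 2024 onwards.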
# ─────────────────────────────────────────────────────────────────────────────
# Comprehensive Report API
# ─────────────────────────────────────────────────────────────────────────────
@app.route("/api/generate_full_report", methods=["POST"])
def api_generate_full_report():
"""
Load all three data sources, run the full analysis pipeline, build the
DOCX report, and return it as a file download.
"""
params = request.json or {}
start_date = params.get("start_date", DEFAULT_START_DATE)
export_format = str(params.get("export_format", "docx")).lower()
if export_format not in {"docx", "pptx"}:
return jsonify({"success": False, "error": "Supported export formats are DOCX and PPTX."}), 400
events_path = params.get("events_file", EVENTS_FILE)
se_path = params.get("safety_energy_file", SAFETY_ENERGY_FILE)
llc_path = params.get("llc_file", LLC_FILE)
run_id = datetime.now().strftime("%Y%m%d_%H%M%S")
run_dir = os.path.join(OUTPUT_DIR, run_id)
try:
log.info("Starting full report generation (run_id=%s)", run_id)
data = load_all(events_path, se_path, llc_path)
        results = run_full_analysis(
            events=data["events"],
            safety_energy=data["safety_energy"],
            llc=data["llc"],
            start_date=start_date,
            split_date=DEFAULT_SPLIT_DATE,
            pd1_name=DEFAULT_PD1_NAME,
            pd2_name=DEFAULT_PD2_NAME,
            output_dir=run_dir,
        )
if export_format == "pptx":
report_path = build_presentation(results, run_dir)
download_name = f"SHEQ_Safety_Performance_{run_id}.pptx"
mimetype = "application/vnd.openxmlformats-officedocument.presentationml.presentation"
else:
report_path = build_report(results, run_dir)
download_name = f"SHEQ_Safety_Performance_{run_id}.docx"
mimetype = "application/vnd.openxmlformats-officedocument.wordprocessingml.document"
log.info("Report ready: %s", report_path)
return send_file(
report_path,
as_attachment=True,
download_name=download_name,
mimetype=mimetype,
)
except FileNotFoundError as e:
log.error("File not found: %s", e)
return jsonify({"success": False, "error": str(e)}), 404
except Exception as e:
log.exception("Report generation failed")
return jsonify({"success": False, "error": str(e)}), 500
# ── Legacy endpoint (kept for backwards compatibility) ────────────────────────
@app.route("/api/download_report", methods=["POST"])
def api_download_report():
"""Legacy Events-only PD comparison report (preserved from v1)."""
params = request.json or {}
start_date = params.get("start_date", DEFAULT_START_DATE)
split_date = params.get("split_date", DEFAULT_SPLIT_DATE)
pd1_name = params.get("pd1_name", DEFAULT_PD1_NAME)
pd2_name = params.get("pd2_name", DEFAULT_PD2_NAME)
run_dir = os.path.join(OUTPUT_DIR, datetime.now().strftime("%Y%m%d_%H%M%S"))
try:
from analysis import run_analysis
docx_path = run_analysis(
EVENTS_FILE, start_date, split_date, pd1_name, pd2_name, run_dir
)
return send_file(
docx_path, as_attachment=True,
download_name="SHEQ_PD_Comparison.docx",
)
except Exception as e:
log.exception("Legacy report generation failed")
return jsonify({"success": False, "error": str(e)}), 500
# ─────────────────────────────────────────────────────────────────────────────
# Entry point
# ─────────────────────────────────────────────────────────────────────────────
if __name__ == "__main__":
os.makedirs(OUTPUT_DIR, exist_ok=True)
log.info("SHEQ Analysis Tool starting on http://localhost:5000")
log.info(" Events: %s", EVENTS_FILE)
log.info(" Safety Energy: %s", SAFETY_ENERGY_FILE)
log.info(" LLC Data: %s", LLC_FILE)
app.run(debug=True, port=5000)
+175
@@ -0,0 +1,175 @@
"""
config.py — Central configuration for the SHEQ Analysis Tool.
Holds file paths, column name mappings, activity type definitions,
severity orders, and brand constants. Edit this file when source
column names change; do not touch the analysis or report modules.
"""
from __future__ import annotations
import os
# ── Default file paths (resolved relative to this file's directory) ──────────
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
EVENTS_FILE = os.environ.get("SHEQ_EVENTS_FILE", os.path.join(BASE_DIR, "Events.xlsx"))
LLC_FILE = os.environ.get("SHEQ_LLC_FILE", os.path.join(BASE_DIR, "LLC_Data.xlsx"))
SAFETY_ENERGY_FILE = os.environ.get("SHEQ_SE_FILE", os.path.join(BASE_DIR, "Safety_Energy.xlsx"))
OUTPUT_DIR = os.environ.get("SHEQ_OUTPUT_DIR", os.path.join(BASE_DIR, "output"))
# ── Events.xlsx column mapping ─────────────────────────────────────────────────
# Maps a normalised internal name → list of candidate column names in order
# of preference. data_loader picks the first match it finds (see the
# resolution example after the map below).
EVENTS_COL_MAP: dict[str, list[str]] = {
"date": ["EventDate", "Event Date", "Date"],
"event_type": ["EventType", "Event Type"],
"consequence": ["Actual Consequence"],
"potential": ["Potential Consequence"],
"status": ["Status"],
"business_unit": ["Business Unit"],
"project": ["Project"],
"location": ["Location", "Location.1"],
"crp": ["CRP Involved", "CRPInvolved"],
"root_cause_cat": ["Root Cause Category"],
"root_cause_sub": ["Root Cause Sub-Category"],
"injury_class": ["Ventia Injury Classification"],
"body_part": ["Bodily Location"],
"brief_desc": ["Brief Description"],
"event_desc": ["Event Description"],
"days_to_enter": ["Days to Enter"],
"event_lag": ["Event Lag"],
"report_lag": ["Report Lag"],
"investigation_done":["Investigation Completed"],
"hipo": ["HiPo"],
"critical_event": ["Critical Event"],
}
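# Resolution example (illustrative): for an export whose columns include
# "Event Date" but not "EventDate", data_loader resolves the "date" key to
# "Event Date"; candidates are always tried in the order listed above.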
# ── Safety_Energy.xlsx column mapping ─────────────────────────────────────────
SE_COL_MAP: dict[str, list[str]] = {
"date": ["EventDate", "Date Conducted", "CompletedDate"],
"module_name": ["ModuleName"],
"module_prefix": ["ModulePrefix"],
"module_type": ["ModuleType"],
"leader": ["CompletedByName", "Conducted By"],
"business_unit": ["Business Unit"],
"project": ["Project"],
"location": ["Location", "Specific Location"],
"shift": ["Shift"],
"at_risk_aspects":["At Risk Aspects"],
"total_questions":["Total Questions"],
"actions": ["Actions"],
"atl_actions": ["ATL Actions"],
"at_risk_crp": ["At risk CRP"],
"llc_topic": ["LLC Topic"],
"at_risk_obs": ["At risk situation/observation"],
"positive_obs": ["Positive Observation"],
"find_fix": ["Find & Fix", "Find&Fix"],
"participants": ["Number of people spoken to", "Participants"],
"time_spent": ["Time Spent on LLC"],
}
# ── LLC_Data.xlsx column mapping ───────────────────────────────────────────────
LLC_COL_MAP: dict[str, list[str]] = {
"date": ["EventDate", "Date Conducted", "Date"],
"topic": ["LLC Topic"],
"leader": ["Conducted by"],
"business_unit": ["Business Unit"],
"project": ["Project"],
"location": ["Location", "Specific Location"],
"crp_focus": ["CRP in Focus"],
"at_risk_obs": ["At risk situation/observation"],
"positive_obs": ["Positive Observation"],
"at_risk_flag": ["At risk work practices observed"],
"participants": ["Participants"],
"find_fix": ["Find&Fix", "Find & Fix"],
"review_action": ["Review & Action"],
"shift": ["Shift"],
}
# ── Activity type normalisation ────────────────────────────────────────────────
# Safety_Energy ModuleType values → display label
MODULE_TYPE_LABELS: dict[str, str] = {
"Leader Learning Conversation": "LLC",
"Critical Control Check": "CCC",
"Operational Control Check": "OCC",
}
# Canonical leading-activity types used throughout the report
LEADING_ACTIVITY_TYPES = ["LLC", "CCC", "OCC"]
# NOTE on duplicate "OCC" label:
# In some legacy notes and older exports the label "OCC" appeared for items
# that are now split into "CCC" (Critical Control Check) and "OCC"
# (Operational Control Check). In the current Safety_Energy export both
# CCC and OCC are already correctly separated via ModuleType. The LLC_Data
# export contains only LLC-type records. No manual deduplication is
# required; however we collapse all three under "Safety Energy" when
# computing the combined domain total.
# ── Consequence severity ordering (low → high) ────────────────────────────────
CONSEQUENCE_ORDER = ["Negligible", "Minor", "Moderate", "Major", "Substantial"]
CONSEQUENCE_SERIOUS = {"Moderate", "Major", "Substantial"}
# ── Brand colours (hex) per DESIGN.md ─────────────────────────────────────────
DEEP_BLUE = "#0b3254"
SKY_BLUE = "#13b5ea"
DARK_GREEN = "#006e47"
MID_GREEN = "#009946"
LIGHT_GREEN = "#7bc143"
PURPLE = "#96358d"
AMBER = "#d97706"
RED = "#dc2626"
MUTED = "#64748b"
CARD_BG = "#f0f5fa"
PAGE_BG = "#f8fafc"
BORDER = "#e2e8f0"
CHART_PALETTE = [DEEP_BLUE, SKY_BLUE, DARK_GREEN, MID_GREEN,
LIGHT_GREEN, PURPLE, AMBER, RED]
# Activity type → colour mapping for charts
ACTIVITY_COLOURS: dict[str, str] = {
"LLC": DEEP_BLUE,
"CCC": SKY_BLUE,
"OCC": DARK_GREEN,
}
# ── Report defaults ────────────────────────────────────────────────────────────
DEFAULT_START_DATE = "2024-01-01"
DEFAULT_SPLIT_DATE = "2025-04-01"
DEFAULT_PD1_NAME = "Matthew Arthur"
DEFAULT_PD2_NAME = "Manga"
# Minimum activity count for a leader to be included in focus tables
LEADER_MIN_ACTIVITIES = 5
# Correlation: minimum month-count required before reporting a correlation
CORR_MIN_MONTHS = 4
# Rolling window used for deeper Safety Energy trend analysis
TWO_YEAR_WINDOW_MONTHS = 24
# Quality scoring bands for leading-activity records
QUALITY_SCORE_BANDS = {
"high_value": 70,
"meaningful": 55,
"shallow": 35,
}
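# Worked example (illustrative): a record scoring 60 sits in the "meaningful"
# band (>= 55 but below the 70 "high_value" cut); 30 falls below every band.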
# Keyword groups for at-risk theme extraction from free-text fields
AT_RISK_KEYWORDS: dict[str, list[str]] = {
"Manual Handling": ["manual handling", "lifting", "carrying", "musculoskeletal", "msd"],
"Working at Height": ["height", "ladder", "scaffold", "fall", "elevated"],
"Traffic/MVA": ["vehicle", "traffic", "driving", "reversing", "motor", "mva", "collision"],
"Hazardous Energy": ["energy", "electrical", "isolation", "loto", "stored energy", "pressure"],
"Slips/Trips/Falls": ["slip", "trip", "fall", "housekeeping", "wet floor", "uneven"],
"PPE": ["ppe", "personal protective", "helmet", "harness", "gloves", "safety glasses"],
"Fatigue": ["fatigue", "tired", "hours", "shift length", "rest"],
"Communication": ["communication", "briefing", "toolbox", "handover", "instruction"],
"Supervision": ["supervision", "supervision", "oversight", "leadership", "monitoring"],
"CRP Compliance": ["crp", "critical risk", "permit", "isolation", "confined space", "work at height"],
}
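# Illustrative sketch (an assumption, not part of the shipped modules) of
# applying these keyword groups to a single free-text observation:
def _example_tag_themes(text: str) -> list[str]:
    """Return each at-risk theme whose keywords appear in `text`."""
    t = str(text).lower()
    return [theme for theme, words in AT_RISK_KEYWORDS.items()
            if any(w in t for w in words)]
# _example_tag_themes("ladder set up on a wet floor")
#   -> ["Working at Height", "Slips/Trips/Falls"]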
+378
@@ -0,0 +1,378 @@
"""
data_loader.py — Load and normalise the three SHEQ data sources.
Each loader returns a pandas DataFrame with normalised column names
(defined in config.py) so that downstream analysis code is insulated
from changes to the source file schema.
Public API
----------
load_events(filepath) -> pd.DataFrame
load_safety_energy(filepath) -> pd.DataFrame
load_llc_data(filepath) -> pd.DataFrame
load_all(events_path, se_path, llc_path) -> dict[str, pd.DataFrame]
"""
from __future__ import annotations
import logging
import warnings
from pathlib import Path
from typing import Optional
import pandas as pd
from config import (
EVENTS_COL_MAP,
SE_COL_MAP,
LLC_COL_MAP,
MODULE_TYPE_LABELS,
EVENTS_FILE,
SAFETY_ENERGY_FILE,
LLC_FILE,
)
log = logging.getLogger(__name__)
# Suppress openpyxl "no default style" warnings
warnings.filterwarnings("ignore", category=UserWarning, module="openpyxl")
# ─────────────────────────────────────────────────────────────────────────────
# Internal helpers
# ─────────────────────────────────────────────────────────────────────────────
def _resolve_col(df: pd.DataFrame, candidates: list[str], key: str) -> Optional[str]:
"""Return the first candidate column that exists in df, or None."""
for c in candidates:
if c in df.columns:
return c
log.debug("Column key '%s' not found (tried: %s)", key, candidates)
return None
def _parse_dates(series: pd.Series) -> pd.Series:
"""
Parse a date series that may contain:
- ISO strings "2024-01-15"
- Long-form strings "Monday, 15 January 2024"
- Excel datetime objects
Returns a tz-naive datetime64 series; unparseable values become NaT.
"""
if pd.api.types.is_datetime64_any_dtype(series):
return series.dt.tz_localize(None) if series.dt.tz is not None else series
def _parse_one(val):
if pd.isna(val):
return pd.NaT
s = str(val).strip()
# Strip leading day-of-week "Monday, " prefix from long-form dates
if "," in s and len(s.split(",")[0].split()) == 1:
s = s.split(",", 1)[1].strip()
try:
return pd.to_datetime(s, dayfirst=True)
except Exception:
return pd.NaT
return series.map(_parse_one)
def _remap(df: pd.DataFrame, col_map: dict[str, list[str]]) -> pd.DataFrame:
    """
    Return a copy of df with normalised column aliases added.

    For each key in col_map, the first matching source column is copied
    under the normalised name. All original source columns are preserved
    under their original names, so callers can still reach additional
    fields if needed.
    """
    result = df.copy()
    for norm_name, candidates in col_map.items():
        src = _resolve_col(df, candidates, norm_name)
        if src is not None:
            result[norm_name] = df[src]
    return result
def _null_rate(series: pd.Series) -> float:
"""Return fraction of null / empty values (01)."""
return series.isna().mean()
def _profile(df: pd.DataFrame, label: str) -> dict:
"""Return a simple quality profile dict for logging."""
return {
"source": label,
"rows": len(df),
"cols": len(df.columns),
"date_nulls": _null_rate(df.get("date", pd.Series(dtype="object"))),
}
# ─────────────────────────────────────────────────────────────────────────────
# Events loader
# ─────────────────────────────────────────────────────────────────────────────
def load_events(filepath: str = EVENTS_FILE) -> pd.DataFrame:
"""
Load Events.xlsx and return a normalised DataFrame.
Normalised columns (see EVENTS_COL_MAP):
date, event_type, consequence, status, business_unit, project,
location, crp, root_cause_cat, root_cause_sub, injury_class,
body_part, brief_desc, event_desc, days_to_enter, event_lag,
report_lag, investigation_done, hipo, critical_event
Also adds:
year, month, year_month (Period[M])
"""
path = Path(filepath)
if not path.exists():
raise FileNotFoundError(f"Events file not found: {filepath}")
log.info("Loading Events from %s", filepath)
raw = pd.read_excel(filepath)
log.info(" Raw shape: %s rows × %s cols", *raw.shape)
df = _remap(raw, EVENTS_COL_MAP)
# Parse dates
df["date"] = _parse_dates(df["date"])
# Drop rows with no date
n_before = len(df)
df = df.dropna(subset=["date"]).copy()
if len(df) < n_before:
log.warning(" Dropped %d rows with missing date", n_before - len(df))
# Derived time fields
df["year"] = df["date"].dt.year
df["month"] = df["date"].dt.month
df["year_month"] = df["date"].dt.to_period("M")
df["dow"] = df["date"].dt.day_name()
# Normalise text fields
for col in ("event_type", "consequence", "business_unit", "project",
"root_cause_cat", "injury_class"):
if col in df.columns:
df[col] = df[col].astype(str).str.strip()
df[col] = df[col].replace({"nan": pd.NA, "None": pd.NA, "": pd.NA})
profile = _profile(df, "Events")
log.info(" Loaded %d events | BUs: %s",
profile["rows"],
list(df["business_unit"].dropna().unique()) if "business_unit" in df else "?")
return df
# ─────────────────────────────────────────────────────────────────────────────
# Safety Energy loader
# ─────────────────────────────────────────────────────────────────────────────
def load_safety_energy(filepath: str = SAFETY_ENERGY_FILE) -> pd.DataFrame:
"""
Load Safety_Energy.xlsx and return a normalised DataFrame.
Safety Energy is the combined analytical domain covering all leading
activity types: LLC (Leader Learning Conversations), CCC (Critical
Control Checks), and OCC (Operational Control Checks).
Normalised columns (see SE_COL_MAP):
date, module_name, module_prefix, module_type, activity_type
(short label: LLC/CCC/OCC), leader, business_unit, project,
location, at_risk_aspects, total_questions, actions, atl_actions,
at_risk_crp, llc_topic, at_risk_obs, positive_obs, participants
Also adds:
year, month, year_month (Period[M])
activity_type — shortened label from MODULE_TYPE_LABELS
"""
path = Path(filepath)
if not path.exists():
raise FileNotFoundError(f"Safety Energy file not found: {filepath}")
log.info("Loading Safety Energy from %s", filepath)
raw = pd.read_excel(filepath)
log.info(" Raw shape: %s rows × %s cols", *raw.shape)
df = _remap(raw, SE_COL_MAP)
df["date"] = _parse_dates(df["date"])
n_before = len(df)
df = df.dropna(subset=["date"]).copy()
if len(df) < n_before:
log.warning(" Dropped %d rows with missing date", n_before - len(df))
# Shorten module_type to LLC / CCC / OCC label
df["activity_type"] = (
df["module_type"]
.map(MODULE_TYPE_LABELS)
.fillna(df.get("module_type", pd.Series(dtype="str")))
)
# Derived time fields
df["year"] = df["date"].dt.year
df["month"] = df["date"].dt.month
df["year_month"] = df["date"].dt.to_period("M")
# Normalise text
for col in ("business_unit", "project", "leader", "activity_type"):
if col in df.columns:
df[col] = df[col].astype(str).str.strip()
df[col] = df[col].replace({"nan": pd.NA, "None": pd.NA, "": pd.NA})
# Numeric fields — coerce to numeric safely
for col in ("at_risk_aspects", "total_questions", "actions", "atl_actions"):
if col in df.columns:
df[col] = pd.to_numeric(df[col], errors="coerce")
log.info(" Loaded %d activities | types: %s",
len(df),
df["activity_type"].value_counts().to_dict() if "activity_type" in df else "?")
return df
# ─────────────────────────────────────────────────────────────────────────────
# LLC Data loader
# ─────────────────────────────────────────────────────────────────────────────
def load_llc_data(filepath: str = LLC_FILE) -> pd.DataFrame:
"""
Load LLC_Data.xlsx and return a normalised DataFrame.
LLC_Data is a supplementary export of Leader Learning Conversations,
often containing richer free-text fields (topic, at-risk observations,
review & action notes) than the Safety_Energy export.
Normalised columns (see LLC_COL_MAP):
date, topic, leader, business_unit, project, location,
crp_focus, at_risk_obs, positive_obs, at_risk_flag, participants
Also adds:
year, month, year_month (Period[M])
"""
path = Path(filepath)
if not path.exists():
raise FileNotFoundError(f"LLC Data file not found: {filepath}")
log.info("Loading LLC Data from %s", filepath)
raw = pd.read_excel(filepath)
log.info(" Raw shape: %s rows × %s cols", *raw.shape)
df = _remap(raw, LLC_COL_MAP)
df["date"] = _parse_dates(df["date"])
n_before = len(df)
df = df.dropna(subset=["date"]).copy()
if len(df) < n_before:
log.warning(" Dropped %d rows with missing date", n_before - len(df))
df["year"] = df["date"].dt.year
df["month"] = df["date"].dt.month
df["year_month"] = df["date"].dt.to_period("M")
for col in ("business_unit", "project", "leader", "topic", "crp_focus"):
if col in df.columns:
df[col] = df[col].astype(str).str.strip()
df[col] = df[col].replace({"nan": pd.NA, "None": pd.NA, "": pd.NA})
# at_risk_flag is a count field in this export
if "at_risk_flag" in df.columns:
df["at_risk_flag"] = pd.to_numeric(df["at_risk_flag"], errors="coerce")
log.info(" Loaded %d LLC records | BUs: %s",
len(df),
list(df["business_unit"].dropna().unique()) if "business_unit" in df else "?")
return df
# ─────────────────────────────────────────────────────────────────────────────
# Combined loader
# ─────────────────────────────────────────────────────────────────────────────
def load_all(
events_path: str = EVENTS_FILE,
se_path: str = SAFETY_ENERGY_FILE,
llc_path: str = LLC_FILE,
) -> dict[str, pd.DataFrame]:
"""
Load all three data sources and return a dict with keys:
'events' -> normalised Events DataFrame
'safety_energy' -> normalised Safety Energy DataFrame
'llc' -> normalised LLC Data DataFrame
Raises FileNotFoundError with a descriptive message if any file
is missing.
"""
return {
"events": load_events(events_path),
"safety_energy": load_safety_energy(se_path),
"llc": load_llc_data(llc_path),
}
# ─────────────────────────────────────────────────────────────────────────────
# Backwards-compatibility shim for old analysis.py
# ─────────────────────────────────────────────────────────────────────────────
def load_and_prepare(filepath: str, start_date: str, split_date: str) -> pd.DataFrame:
    """
    Backwards-compatible wrapper used by the old analysis.py module.

    Returns Events data filtered to start_date onwards, with a 'PD'
    column (pd1 / pd2) based on split_date.
    """
    df = load_events(filepath)
    # Rename normalised columns back to legacy names for old analysis.py
    rename_map = {
        "date": "Event Date",
        "event_type": "Event Type",
        "consequence": "Actual Consequence",
        "crp": "CRPInvolved",
        "root_cause_cat": "Root Cause Category",
        "injury_class": "Ventia Injury Classification",
        "body_part": "Bodily Location",
    }
    df = df.rename(columns={k: v for k, v in rename_map.items() if k in df.columns})
    # Handle missing columns that old code expects
    if "Days to Investigate" not in df.columns:
        df["Days to Investigate"] = df.get("event_lag", pd.Series(dtype="float64"))
    if "Days to Close" not in df.columns:
        df["Days to Close"] = pd.to_numeric(
            pd.to_datetime(df.get("ClosedAtDate"), errors="coerce")
            .sub(df["Event Date"])
            .dt.days,
            errors="coerce",
        )
    if "CRPInvolved" not in df.columns:
        df["CRPInvolved"] = df.get("CRP Involved", pd.NA)
    df = df[df["Event Date"] >= pd.Timestamp(start_date)].copy()
    df["Year"] = df["Event Date"].dt.year
    df["Month"] = df["Event Date"].dt.month
    df["MonthName"] = df["Event Date"].dt.strftime("%b")
    df["DOW"] = df["Event Date"].dt.day_name()
    df["YearMonth"] = df["Event Date"].dt.to_period("M")
    df["PD"] = df["Event Date"].apply(
        lambda x: "pd1" if x < pd.Timestamp(split_date) else "pd2"
    )
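    # Note (sketch, not the original code): an equivalent vectorised form is
    #   df["PD"] = np.where(df["Event Date"] < pd.Timestamp(split_date), "pd1", "pd2")
    # which avoids the per-row lambda but requires importing numpy as np;
    # the .apply above is kept to preserve the legacy behaviour exactly.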
    return df

def get_body_parts(series: pd.Series) -> pd.Series:
    """Split multi-value body part entries and normalise (legacy helper)."""
    parts = []
    for val in series.dropna():
        for part in str(val).split(","):
            part = part.strip()
            if part and "unspecified" not in part.lower():
                parts.append(part)
    return pd.Series(parts)
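
if __name__ == "__main__":
    # Minimal smoke test (a sketch; assumes the three default export files
    # exist at the module-level paths EVENTS_FILE, SAFETY_ENERGY_FILE
    # and LLC_FILE).
    import logging

    logging.basicConfig(level=logging.INFO)
    frames = load_all()
    for name, frame in frames.items():
        print(f"{name}: {len(frame)} rows x {frame.shape[1]} cols")
    # Legacy helper demo on the normalised events data
    if "body_part" in frames["events"].columns:
        print(get_body_parts(frames["events"]["body_part"]).value_counts().head())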