Package Summary
| Field | Value |
|---|---|
| Tags | No category tags. |
| Version | 0.48.0 |
| License | Apache License 2.0 |
| Build type | AMENT_CMAKE |
| Use | RECOMMENDED |
Repository Summary
| Field | Value |
|---|---|
| Description | |
| Checkout URI | https://github.com/autowarefoundation/autoware_universe.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2025-12-03 |
| Dev Status | UNKNOWN |
| Released | UNRELEASED |
| Tags | planner ros calibration self-driving-car autonomous-driving autonomous-vehicles ros2 3d-map autoware |
| Contributing | Help Wanted (-), Good First Issues (-), Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Tao Zhong
- Masato Saeki
- Yoshi Ri
- Taekjin Lee
Authors
autoware_traffic_light_multi_camera_fusion
Overview
This node fuses traffic light recognition results from multiple cameras to produce a single, reliable traffic light state. By integrating information from different viewpoints and ROIs, it ensures robust performance even in challenging scenarios, such as partial occlusions or recognition errors from an individual camera.
```mermaid
graph LR
    subgraph "Multi Camera Feeds"
        direction TB
        Cam1[" <br> <b>Camera 1</b> <br> State: GREEN <br> Confidence: 0.95"]
        Cam2[" <br> <b>Camera 2</b> <br> State: GREEN <br> Confidence: 0.94"]
        Cam3[" <br> <b>Camera 3</b> <br> State: RED <br> Confidence: 0.95"]
    end
    subgraph "Processing"
        direction TB
        Fusion["<b>Multi-Camera Fusion Node</b> <br><i>Fuses evidence using <br> Bayesian updating</i>"]
    end
    subgraph "Unified & Robust State"
        direction TB
        Result[" <br> <b>Final State: GREEN</b>"]
    end
    Cam1 --> Fusion
    Cam2 --> Fusion
    Cam3 --> Fusion
    Fusion --> Result
    style Fusion fill:#e0f7fa,stroke:#00796b,stroke-width:2px,color:#004d40
    style Result fill:#e8f5e9,stroke:#2e7d32,stroke-width:3px,color:#1b5e20
```
How It Works
The fusion algorithm operates in two main stages.
```mermaid
graph TD
    subgraph "Input: Multiple Camera Results"
        A["Camera 1<br>Recognition Result"]
        B["Camera 2<br>Recognition Result"]
        C["..."]
    end
    subgraph "Stage 1: Per-Camera Fusion"
        D{"Best ROI Selection<br><br>For each ROI,<br>select the single most<br>reliable detection result."}
    end
    E["Best Detection per ROI"]
    subgraph "Stage 2: Group Fusion"
        F{"Group Consensus<br><br>Fuse all 'best detections'<br>into a single state for<br>the entire traffic light group<br>using Bayesian updating."}
    end
    subgraph "Final Output"
        G["Final Group State<br>(e.g., GREEN)"]
    end
    A --> D
    B --> D
    C --> D
    D --> E
    E --> F
    F --> G
    style D fill:#e0f7fa,stroke:#00796b,stroke-width:2px,color:black
    style F fill:#e0f7fa,stroke:#00796b,stroke-width:2px,color:black
    style E fill:#fff,stroke:#333,stroke-width:2px,stroke-dasharray: 5 5,color:black
    style G fill:#e8f5e9,stroke:#2e7d32,stroke-width:3px,color:black
```
Stage 1: Best View Selection (Per-Camera Fusion)
First, for each individual ROI, the node selects the single most reliable detection (the "best shot") from all available camera views.
This selection follows a strict priority order:
- Latest Timestamp: For the same sensor, detections with the most recent timestamp are prioritized.
- Known State: Results with a known color (Red, Green, etc.) are prioritized over ‘Unknown’.
- Full Visibility: Detections from non-truncated ROIs (fully visible ROIs) are prioritized.
- Highest Confidence: The result with the highest detection confidence score is prioritized.
This process yields the single most plausible recognition for every ROI, as sketched below.
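A minimal sketch of this priority comparison, assuming a simplified detection type; the `Detection` struct, its fields, and `isBetterDetection` are illustrative names, not the package's actual API:

```cpp
#include <cstdint>

struct Detection
{
  std::uint64_t stamp_ns;  // detection timestamp in nanoseconds
  bool is_unknown;         // true if the recognized color is 'Unknown'
  bool is_truncated;       // true if the ROI is clipped at the image border
  float confidence;        // classifier confidence score in [0, 1]
};

// Returns true if `candidate` should replace `best` for a given ROI,
// following the documented priority order.
bool isBetterDetection(const Detection & candidate, const Detection & best)
{
  // 1. Latest timestamp wins (relevant when comparing results from the same sensor).
  if (candidate.stamp_ns != best.stamp_ns) return candidate.stamp_ns > best.stamp_ns;
  // 2. A known color beats 'Unknown'.
  if (candidate.is_unknown != best.is_unknown) return !candidate.is_unknown;
  // 3. A fully visible (non-truncated) ROI beats a clipped one.
  if (candidate.is_truncated != best.is_truncated) return !candidate.is_truncated;
  // 4. Otherwise, the higher confidence score wins.
  return candidate.confidence > best.confidence;
}
```

Because each criterion is checked before the next, an earlier criterion always dominates: confidence only breaks ties among equally recent, known-state, fully visible detections.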
Stage 2: Group Consensus (Bayesian Fusion)
Next, the “best shot” detections from Stage 1 are fused to determine a single, coherent state for the entire traffic light group. Instead of simple voting or averaging, this node employs a more principled method: Bayesian updating.
- Belief Score: Each color (Red, Green, Yellow) maintains a “belief score” represented in log-odds for numerical stability and ease of updating.
- Evidence Update: Each selected detection from Stage 1 is treated as a piece of “evidence.” Its confidence score is converted into a log-odds value representing the strength of that evidence.
- Score Accumulation: This evidence is added to the corresponding color’s belief score.
- Final Decision: After accumulating all evidence, the color with the highest final score is chosen as the definitive state for the group (see the sketch below).
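The following is a minimal sketch of the log-odds accumulation, assuming each selected detection carries a color label and a confidence in (0, 1). The types and function names are illustrative; only the `prior_log_odds` parameter name appears in this package's changelog, and its exact semantics here are an assumption:

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <vector>

enum Color { RED = 0, YELLOW = 1, GREEN = 2 };

struct Evidence
{
  Color color;        // color label of the selected "best shot"
  double confidence;  // detection confidence in (0, 1)
};

// Convert a probability into log-odds: log(p / (1 - p)).
double toLogOdds(double p)
{
  const double eps = 1e-6;  // guard against p being exactly 0 or 1
  p = std::clamp(p, eps, 1.0 - eps);
  return std::log(p / (1.0 - p));
}

// Accumulate evidence per color and pick the color with the highest belief.
Color fuseGroup(const std::vector<Evidence> & evidences, double prior_log_odds)
{
  std::array<double, 3> belief{};
  belief.fill(prior_log_odds);  // every color starts at the same prior belief
  for (const auto & e : evidences) {
    belief[e.color] += toLogOdds(e.confidence);  // stronger evidence shifts belief more
  }
  const auto it = std::max_element(belief.begin(), belief.end());
  return static_cast<Color>(it - belief.begin());
}
```

Working in log-odds turns the multiplication of Bayesian likelihood ratios into simple addition, which is numerically stable and makes each update a single add.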
Input topics
For every camera, the node subscribes to the following three topics:
File truncated at 100 lines; see the full file.
Changelog for package autoware_traffic_light_multi_camera_fusion
0.48.0 (2025-11-18)
- Merge remote-tracking branch 'origin/main' into humble
- fix(traffic_light_camera_fusion): change group fusion algorithm (#11297)
  - fix(traffic_light_camera_fusion): change group fusion algorithm
  - style(pre-commit): autofix
  - fix: potential array access violation
  - fix: validate func
  - feat: bayesian update
  - doc(traffic_light_camera_fusion): add bayesian method
  - chore: adding comments to variables and functions
  - doc: make simple, add figure
  - doc: fix github style
  - doc: fix mermaid error
  - style(pre-commit): autofix
  - chore: add param prior_log_odds
  - fix: modified summation function
  - feat: support color and shape
  - style(pre-commit): autofix
  - doc: update param schema
  - fix: bayesian estimation
  - style(pre-commit): autofix
  - fix: build error
  - fix: code health
  - fix: code complex
  - fix: complex branch
  - style(pre-commit): autofix
  - modify docs
  Co-authored-by: pre-commit-ci-lite[bot] <117423508+pre-commit-ci-lite[bot]@users.noreply.github.com>
  Co-authored-by: Shumpei Wakabayashi <42209144+shmpwk@users.noreply.github.com>
  Co-authored-by: Yuxuan Liu <619684051@qq.com>
  Co-authored-by: Taekjin LEE <taekjin.lee@tier4.jp>
  Co-authored-by: Masato Saeki <78376491+MasatoSaeki@users.noreply.github.com>
  Co-authored-by: MasatoSaeki <masato.saeki@tier4.jp>
- refactor(autoware_traffic_light_multi_camera_fusion): split utils and add test (#10360)
  - init
  - chore
  - style(pre-commit): autofix
  - add remained test
  - add include file
  - refactor
  - move variable from cpp to hpp
  - chore
  Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
- Contributors: Masato Saeki, Ryohsuke Mitsudome, toki-1441
0.47.1 (2025-08-14)
0.47.0 (2025-08-11)
0.46.0 (2025-06-20)
0.45.0 (2025-05-22)
- Merge remote-tracking branch 'origin/main' into tmp/notbot/bump_version_base
- chore: update traffic light packages code owner (#10644)
  - chore: add Taekjin Lee as maintainer to multiple perception packages
- Contributors: Taekjin LEE, TaikiYamada4
0.44.2 (2025-06-10)
0.44.1 (2025-05-01)
0.44.0 (2025-04-18)
0.43.0 (2025-03-21)
- Merge remote-tracking branch 'origin/main' into chore/bump-version-0.43
- chore: rename from `autoware.universe` to `autoware_universe` (#10306)
- Contributors: Hayato Mizushima, Yutaka Kondo
0.42.0 (2025-03-03)
- Merge remote-tracking branch 'origin/main' into tmp/bot/bump_version_base
- chore: refine maintainer list (#10110)
File truncated at 100 lines; see the full file.
Package Dependencies
System Dependencies
Dependant Packages
| Name | Deps |
|---|---|
| tier4_perception_launch | |
Launch files
- launch/traffic_light_multi_camera_fusion.launch.xml
  - input/vector_map [default: /map/vector_map]
  - param_path [default: $(find-pkg-share autoware_traffic_light_multi_camera_fusion)/config/traffic_light_multi_camera_fusion.param.yaml]
  - output/traffic_signals [default: /perception/traffic_light_recognition/traffic_signals]
  - camera_namespaces [default: [camera6, camera7]]
Messages
Services
Plugins
Package Summary
| Tags | No category tags. |
| Version | 0.48.0 |
| License | Apache License 2.0 |
| Build type | AMENT_CMAKE |
| Use | RECOMMENDED |
Repository Summary
| Description | |
| Checkout URI | https://github.com/autowarefoundation/autoware_universe.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2025-12-03 |
| Dev Status | UNKNOWN |
| Released | UNRELEASED |
| Tags | planner ros calibration self-driving-car autonomous-driving autonomous-vehicles ros2 3d-map autoware |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Tao Zhong
- Masato Saeki
- Yoshi Ri
- Taekjin Lee
Authors
autoware_traffic_light_multi_camera_fusion
Overview
This node fuses traffic light recognition results from multiple cameras to produce a single, reliable traffic light state. By integrating information from different viewpoints and ROIs, it ensures robust performance even in challenging scenarios, such as partial occlusions or recognition errors from an individual camera.
graph LR
subgraph "Multi Camera Feeds"
direction TB
Cam1[" <br> <b>Camera 1</b> <br> State: GREEN <br> Confidence: 0.95"]
Cam2[" <br> <b>Camera 2</b> <br> State: GREEN <br> Confidence: 0.94"]
Cam3[" <br> <b>Camera 3</b> <br> State: RED <br> Confidence: 0.95"]
end
subgraph "Processing"
direction TB
Fusion["<b>Multi-Camera Fusion Node</b> <br><i>Fuses evidence using <br> Bayesian updating</i>"]
end
subgraph "Unified & Robust State"
direction TB
Result[" <br> <b>Final State: GREEN</b>"]
end
Cam1 --> Fusion
Cam2 --> Fusion
Cam3 --> Fusion
Fusion --> Result
style Fusion fill:#e0f7fa,stroke:#00796b,stroke-width:2px,color:#004d40
style Result fill:#e8f5e9,stroke:#2e7d32,stroke-width:3px,color:#1b5e20
How It Works
The fusion algorithm operates in two main stages.
graph TD
subgraph "Input: Multiple Camera Results"
A["Camera 1<br>Recognition Result"]
B["Camera 2<br>Recognition Result"]
C["..."]
end
subgraph "Stage 1: Per-Camera Fusion"
D{"Best ROIs Selection<br><br>For each ROI,<br>select the single most<br>reliable detection result."}
end
E["Best Detection per ROIs"]
subgraph "Stage 2: Group Fusion"
F{"Group Consensus<br><br>Fuse all 'best detections'<br>into a single state for<br>the entire traffic light group<br>using Bayesian updating."}
end
subgraph "Final Output"
G["Final Group State<br>(e.g., GREEN)"]
end
A --> D
B --> D
C --> D
D --> E
E --> F
F --> G
style D fill:#e0f7fa,stroke:#00796b,stroke-width:2px,color:black
style F fill:#e0f7fa,stroke:#00796b,stroke-width:2px,color:black
style E fill:#fff,stroke:#333,stroke-width:2px,stroke-dasharray: 5 5,color:black
style G fill:#e8f5e9,stroke:#2e7d32,stroke-width:3px,color:black
Stage 1: Best View Selection (Per-Camera Fusion)
First, for each individual ROIs, the node selects the single most reliable detection—the “best shot”—from all available camera views.
This selection is based on a strict priority queue:
- Latest Timestamp: Detections with the most recent timestamp are prioritized for the same sensor.
- Known State: Results with a known color (Red, Green, etc.) are prioritized over ‘Unknown’.
- Full Visibility: Detections from non-truncated ROIs (fully visible ROIs) are prioritized.
- Highest Confidence: The result with the highest detection confidence score is prioritized.
This process yields the single most plausible recognition for every ROIs.
Stage 2: Group Consensus (Bayesian Fusion)
Next, the “best shot” detections from Stage 1 are fused to determine a single, coherent state for the entire traffic light group. Instead of simple voting or averaging, this node employs a more principled method: Bayesian updating.
- Belief Score: Each color (Red, Green, Yellow) maintains a “belief score” represented in log-odds for numerical stability and ease of updating.
- Evidence Update: Each selected detection from Stage 1 is treated as a piece of “evidence.” Its confidence score is converted into a log-odds value representing the strength of that evidence.
- Score Accumulation: This evidence is added to the corresponding color’s belief score.
- Final Decision: After accumulating all evidence, the color with the highest final score is chosen as the definitive state for the group.
Input topics
For every camera, the following three topics are subscribed:
File truncated at 100 lines see the full file
Changelog for package autoware_traffic_light_multi_camera_fusion
0.48.0 (2025-11-18)
-
Merge remote-tracking branch 'origin/main' into humble
-
fix(traffic_light_camera_fusion): change group fusion algorithm (#11297)
- fix(traffic_light_camera_fusion): change group fusion algorithm
- style(pre-commit): autofix
- fix: potential array access violation
- fix: validate func
- feat: bayesian update
- doc(traffic_light_camera_fusion): add bayesian method
- chore: adding comments to variables and functions
- doc: make simple, add figure
- doc: fix github style
- doc: fix mermaid error
- style(pre-commit): autofix
- chore: add param prior_log_odds
- fix: modified summation function
- feat: support color and shape
- style(pre-commit): autofix
- doc: update param schema
- fix: bayesian estimation
- style(pre-commit): autofix
- fix: build error
- fix: code health
- fix: code complex
- fix: complex branch
- style(pre-commit): autofix
* modify docs ---------Co-authored-by: pre-commit-ci-lite[bot] <117423508+pre-commit-ci-lite[bot]@users.noreply.github.com> Co-authored-by: Shumpei Wakabayashi <<42209144+shmpwk@users.noreply.github.com>> Co-authored-by: Yuxuan Liu <<619684051@qq.com>> Co-authored-by: Taekjin LEE <<taekjin.lee@tier4.jp>> Co-authored-by: Masato Saeki <<78376491+MasatoSaeki@users.noreply.github.com>> Co-authored-by: MasatoSaeki <<masato.saeki@tier4.jp>>
-
refactor(autoware_traffic_light_multi_camera_fusion): split utils and add test (#10360)
- init
- chore
- style(pre-commit): autofix
- add remained test
- add include file
- refactor
- move variable from cpp to hpp
* chore
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
-
Contributors: Masato Saeki, Ryohsuke Mitsudome, toki-1441
0.47.1 (2025-08-14)
0.47.0 (2025-08-11)
0.46.0 (2025-06-20)
0.45.0 (2025-05-22)
- Merge remote-tracking branch 'origin/main' into tmp/notbot/bump_version_base
- chore: update traffic light packages code owner (#10644) chore: add Taekjin Lee as maintainer to multiple perception packages
- Contributors: Taekjin LEE, TaikiYamada4
0.44.2 (2025-06-10)
0.44.1 (2025-05-01)
0.44.0 (2025-04-18)
0.43.0 (2025-03-21)
- Merge remote-tracking branch 'origin/main' into chore/bump-version-0.43
- chore: rename from [autoware.universe]{.title-ref} to [autoware_universe]{.title-ref} (#10306)
- Contributors: Hayato Mizushima, Yutaka Kondo
0.42.0 (2025-03-03)
-
Merge remote-tracking branch 'origin/main' into tmp/bot/bump_version_base
-
chore: refine maintainer list (#10110)
File truncated at 100 lines see the full file
Package Dependencies
System Dependencies
Dependant Packages
| Name | Deps |
|---|---|
| tier4_perception_launch |
Launch files
- launch/traffic_light_multi_camera_fusion.launch.xml
-
- input/vector_map [default: /map/vector_map]
- param_path [default: $(find-pkg-share autoware_traffic_light_multi_camera_fusion)/config/traffic_light_multi_camera_fusion.param.yaml]
- output/traffic_signals [default: /perception/traffic_light_recognition/traffic_signals]
- camera_namespaces [default: [camera6, camera7]]
Messages
Services
Plugins
Recent questions tagged autoware_traffic_light_multi_camera_fusion at Robotics Stack Exchange
Package Summary
| Tags | No category tags. |
| Version | 0.48.0 |
| License | Apache License 2.0 |
| Build type | AMENT_CMAKE |
| Use | RECOMMENDED |
Repository Summary
| Description | |
| Checkout URI | https://github.com/autowarefoundation/autoware_universe.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2025-12-03 |
| Dev Status | UNKNOWN |
| Released | UNRELEASED |
| Tags | planner ros calibration self-driving-car autonomous-driving autonomous-vehicles ros2 3d-map autoware |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Tao Zhong
- Masato Saeki
- Yoshi Ri
- Taekjin Lee
Authors
autoware_traffic_light_multi_camera_fusion
Overview
This node fuses traffic light recognition results from multiple cameras to produce a single, reliable traffic light state. By integrating information from different viewpoints and ROIs, it ensures robust performance even in challenging scenarios, such as partial occlusions or recognition errors from an individual camera.
graph LR
subgraph "Multi Camera Feeds"
direction TB
Cam1[" <br> <b>Camera 1</b> <br> State: GREEN <br> Confidence: 0.95"]
Cam2[" <br> <b>Camera 2</b> <br> State: GREEN <br> Confidence: 0.94"]
Cam3[" <br> <b>Camera 3</b> <br> State: RED <br> Confidence: 0.95"]
end
subgraph "Processing"
direction TB
Fusion["<b>Multi-Camera Fusion Node</b> <br><i>Fuses evidence using <br> Bayesian updating</i>"]
end
subgraph "Unified & Robust State"
direction TB
Result[" <br> <b>Final State: GREEN</b>"]
end
Cam1 --> Fusion
Cam2 --> Fusion
Cam3 --> Fusion
Fusion --> Result
style Fusion fill:#e0f7fa,stroke:#00796b,stroke-width:2px,color:#004d40
style Result fill:#e8f5e9,stroke:#2e7d32,stroke-width:3px,color:#1b5e20
How It Works
The fusion algorithm operates in two main stages.
graph TD
subgraph "Input: Multiple Camera Results"
A["Camera 1<br>Recognition Result"]
B["Camera 2<br>Recognition Result"]
C["..."]
end
subgraph "Stage 1: Per-Camera Fusion"
D{"Best ROIs Selection<br><br>For each ROI,<br>select the single most<br>reliable detection result."}
end
E["Best Detection per ROIs"]
subgraph "Stage 2: Group Fusion"
F{"Group Consensus<br><br>Fuse all 'best detections'<br>into a single state for<br>the entire traffic light group<br>using Bayesian updating."}
end
subgraph "Final Output"
G["Final Group State<br>(e.g., GREEN)"]
end
A --> D
B --> D
C --> D
D --> E
E --> F
F --> G
style D fill:#e0f7fa,stroke:#00796b,stroke-width:2px,color:black
style F fill:#e0f7fa,stroke:#00796b,stroke-width:2px,color:black
style E fill:#fff,stroke:#333,stroke-width:2px,stroke-dasharray: 5 5,color:black
style G fill:#e8f5e9,stroke:#2e7d32,stroke-width:3px,color:black
Stage 1: Best View Selection (Per-Camera Fusion)
First, for each individual ROIs, the node selects the single most reliable detection—the “best shot”—from all available camera views.
This selection is based on a strict priority queue:
- Latest Timestamp: Detections with the most recent timestamp are prioritized for the same sensor.
- Known State: Results with a known color (Red, Green, etc.) are prioritized over ‘Unknown’.
- Full Visibility: Detections from non-truncated ROIs (fully visible ROIs) are prioritized.
- Highest Confidence: The result with the highest detection confidence score is prioritized.
This process yields the single most plausible recognition for every ROIs.
Stage 2: Group Consensus (Bayesian Fusion)
Next, the “best shot” detections from Stage 1 are fused to determine a single, coherent state for the entire traffic light group. Instead of simple voting or averaging, this node employs a more principled method: Bayesian updating.
- Belief Score: Each color (Red, Green, Yellow) maintains a “belief score” represented in log-odds for numerical stability and ease of updating.
- Evidence Update: Each selected detection from Stage 1 is treated as a piece of “evidence.” Its confidence score is converted into a log-odds value representing the strength of that evidence.
- Score Accumulation: This evidence is added to the corresponding color’s belief score.
- Final Decision: After accumulating all evidence, the color with the highest final score is chosen as the definitive state for the group.
Input topics
For every camera, the following three topics are subscribed:
File truncated at 100 lines see the full file
Changelog for package autoware_traffic_light_multi_camera_fusion
0.48.0 (2025-11-18)
-
Merge remote-tracking branch 'origin/main' into humble
-
fix(traffic_light_camera_fusion): change group fusion algorithm (#11297)
- fix(traffic_light_camera_fusion): change group fusion algorithm
- style(pre-commit): autofix
- fix: potential array access violation
- fix: validate func
- feat: bayesian update
- doc(traffic_light_camera_fusion): add bayesian method
- chore: adding comments to variables and functions
- doc: make simple, add figure
- doc: fix github style
- doc: fix mermaid error
- style(pre-commit): autofix
- chore: add param prior_log_odds
- fix: modified summation function
- feat: support color and shape
- style(pre-commit): autofix
- doc: update param schema
- fix: bayesian estimation
- style(pre-commit): autofix
- fix: build error
- fix: code health
- fix: code complex
- fix: complex branch
- style(pre-commit): autofix
* modify docs ---------Co-authored-by: pre-commit-ci-lite[bot] <117423508+pre-commit-ci-lite[bot]@users.noreply.github.com> Co-authored-by: Shumpei Wakabayashi <<42209144+shmpwk@users.noreply.github.com>> Co-authored-by: Yuxuan Liu <<619684051@qq.com>> Co-authored-by: Taekjin LEE <<taekjin.lee@tier4.jp>> Co-authored-by: Masato Saeki <<78376491+MasatoSaeki@users.noreply.github.com>> Co-authored-by: MasatoSaeki <<masato.saeki@tier4.jp>>
-
refactor(autoware_traffic_light_multi_camera_fusion): split utils and add test (#10360)
- init
- chore
- style(pre-commit): autofix
- add remained test
- add include file
- refactor
- move variable from cpp to hpp
* chore
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
-
Contributors: Masato Saeki, Ryohsuke Mitsudome, toki-1441
0.47.1 (2025-08-14)
0.47.0 (2025-08-11)
0.46.0 (2025-06-20)
0.45.0 (2025-05-22)
- Merge remote-tracking branch 'origin/main' into tmp/notbot/bump_version_base
- chore: update traffic light packages code owner (#10644) chore: add Taekjin Lee as maintainer to multiple perception packages
- Contributors: Taekjin LEE, TaikiYamada4
0.44.2 (2025-06-10)
0.44.1 (2025-05-01)
0.44.0 (2025-04-18)
0.43.0 (2025-03-21)
- Merge remote-tracking branch 'origin/main' into chore/bump-version-0.43
- chore: rename from [autoware.universe]{.title-ref} to [autoware_universe]{.title-ref} (#10306)
- Contributors: Hayato Mizushima, Yutaka Kondo
0.42.0 (2025-03-03)
-
Merge remote-tracking branch 'origin/main' into tmp/bot/bump_version_base
-
chore: refine maintainer list (#10110)
File truncated at 100 lines see the full file
Package Dependencies
System Dependencies
Dependant Packages
| Name | Deps |
|---|---|
| tier4_perception_launch |
Launch files
- launch/traffic_light_multi_camera_fusion.launch.xml
-
- input/vector_map [default: /map/vector_map]
- param_path [default: $(find-pkg-share autoware_traffic_light_multi_camera_fusion)/config/traffic_light_multi_camera_fusion.param.yaml]
- output/traffic_signals [default: /perception/traffic_light_recognition/traffic_signals]
- camera_namespaces [default: [camera6, camera7]]
Messages
Services
Plugins
Recent questions tagged autoware_traffic_light_multi_camera_fusion at Robotics Stack Exchange
Package Summary
| Tags | No category tags. |
| Version | 0.48.0 |
| License | Apache License 2.0 |
| Build type | AMENT_CMAKE |
| Use | RECOMMENDED |
Repository Summary
| Description | |
| Checkout URI | https://github.com/autowarefoundation/autoware_universe.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2025-12-03 |
| Dev Status | UNKNOWN |
| Released | UNRELEASED |
| Tags | planner ros calibration self-driving-car autonomous-driving autonomous-vehicles ros2 3d-map autoware |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Tao Zhong
- Masato Saeki
- Yoshi Ri
- Taekjin Lee
Authors
autoware_traffic_light_multi_camera_fusion
Overview
This node fuses traffic light recognition results from multiple cameras to produce a single, reliable traffic light state. By integrating information from different viewpoints and ROIs, it ensures robust performance even in challenging scenarios, such as partial occlusions or recognition errors from an individual camera.
graph LR
subgraph "Multi Camera Feeds"
direction TB
Cam1[" <br> <b>Camera 1</b> <br> State: GREEN <br> Confidence: 0.95"]
Cam2[" <br> <b>Camera 2</b> <br> State: GREEN <br> Confidence: 0.94"]
Cam3[" <br> <b>Camera 3</b> <br> State: RED <br> Confidence: 0.95"]
end
subgraph "Processing"
direction TB
Fusion["<b>Multi-Camera Fusion Node</b> <br><i>Fuses evidence using <br> Bayesian updating</i>"]
end
subgraph "Unified & Robust State"
direction TB
Result[" <br> <b>Final State: GREEN</b>"]
end
Cam1 --> Fusion
Cam2 --> Fusion
Cam3 --> Fusion
Fusion --> Result
style Fusion fill:#e0f7fa,stroke:#00796b,stroke-width:2px,color:#004d40
style Result fill:#e8f5e9,stroke:#2e7d32,stroke-width:3px,color:#1b5e20
How It Works
The fusion algorithm operates in two main stages.
graph TD
subgraph "Input: Multiple Camera Results"
A["Camera 1<br>Recognition Result"]
B["Camera 2<br>Recognition Result"]
C["..."]
end
subgraph "Stage 1: Per-Camera Fusion"
D{"Best ROIs Selection<br><br>For each ROI,<br>select the single most<br>reliable detection result."}
end
E["Best Detection per ROIs"]
subgraph "Stage 2: Group Fusion"
F{"Group Consensus<br><br>Fuse all 'best detections'<br>into a single state for<br>the entire traffic light group<br>using Bayesian updating."}
end
subgraph "Final Output"
G["Final Group State<br>(e.g., GREEN)"]
end
A --> D
B --> D
C --> D
D --> E
E --> F
F --> G
style D fill:#e0f7fa,stroke:#00796b,stroke-width:2px,color:black
style F fill:#e0f7fa,stroke:#00796b,stroke-width:2px,color:black
style E fill:#fff,stroke:#333,stroke-width:2px,stroke-dasharray: 5 5,color:black
style G fill:#e8f5e9,stroke:#2e7d32,stroke-width:3px,color:black
Stage 1: Best View Selection (Per-Camera Fusion)
First, for each individual ROIs, the node selects the single most reliable detection—the “best shot”—from all available camera views.
This selection is based on a strict priority queue:
- Latest Timestamp: Detections with the most recent timestamp are prioritized for the same sensor.
- Known State: Results with a known color (Red, Green, etc.) are prioritized over ‘Unknown’.
- Full Visibility: Detections from non-truncated ROIs (fully visible ROIs) are prioritized.
- Highest Confidence: The result with the highest detection confidence score is prioritized.
This process yields the single most plausible recognition for every ROIs.
Stage 2: Group Consensus (Bayesian Fusion)
Next, the “best shot” detections from Stage 1 are fused to determine a single, coherent state for the entire traffic light group. Instead of simple voting or averaging, this node employs a more principled method: Bayesian updating.
- Belief Score: Each color (Red, Green, Yellow) maintains a “belief score” represented in log-odds for numerical stability and ease of updating.
- Evidence Update: Each selected detection from Stage 1 is treated as a piece of “evidence.” Its confidence score is converted into a log-odds value representing the strength of that evidence.
- Score Accumulation: This evidence is added to the corresponding color’s belief score.
- Final Decision: After accumulating all evidence, the color with the highest final score is chosen as the definitive state for the group.
Input topics
For every camera, the following three topics are subscribed:
File truncated at 100 lines see the full file
Changelog for package autoware_traffic_light_multi_camera_fusion
0.48.0 (2025-11-18)
-
Merge remote-tracking branch 'origin/main' into humble
-
fix(traffic_light_camera_fusion): change group fusion algorithm (#11297)
- fix(traffic_light_camera_fusion): change group fusion algorithm
- style(pre-commit): autofix
- fix: potential array access violation
- fix: validate func
- feat: bayesian update
- doc(traffic_light_camera_fusion): add bayesian method
- chore: adding comments to variables and functions
- doc: make simple, add figure
- doc: fix github style
- doc: fix mermaid error
- style(pre-commit): autofix
- chore: add param prior_log_odds
- fix: modified summation function
- feat: support color and shape
- style(pre-commit): autofix
- doc: update param schema
- fix: bayesian estimation
- style(pre-commit): autofix
- fix: build error
- fix: code health
- fix: code complex
- fix: complex branch
- style(pre-commit): autofix
* modify docs ---------Co-authored-by: pre-commit-ci-lite[bot] <117423508+pre-commit-ci-lite[bot]@users.noreply.github.com> Co-authored-by: Shumpei Wakabayashi <<42209144+shmpwk@users.noreply.github.com>> Co-authored-by: Yuxuan Liu <<619684051@qq.com>> Co-authored-by: Taekjin LEE <<taekjin.lee@tier4.jp>> Co-authored-by: Masato Saeki <<78376491+MasatoSaeki@users.noreply.github.com>> Co-authored-by: MasatoSaeki <<masato.saeki@tier4.jp>>
-
refactor(autoware_traffic_light_multi_camera_fusion): split utils and add test (#10360)
- init
- chore
- style(pre-commit): autofix
- add remained test
- add include file
- refactor
- move variable from cpp to hpp
* chore
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
-
Contributors: Masato Saeki, Ryohsuke Mitsudome, toki-1441
0.47.1 (2025-08-14)
0.47.0 (2025-08-11)
0.46.0 (2025-06-20)
0.45.0 (2025-05-22)
- Merge remote-tracking branch 'origin/main' into tmp/notbot/bump_version_base
- chore: update traffic light packages code owner (#10644) chore: add Taekjin Lee as maintainer to multiple perception packages
- Contributors: Taekjin LEE, TaikiYamada4
0.44.2 (2025-06-10)
0.44.1 (2025-05-01)
0.44.0 (2025-04-18)
0.43.0 (2025-03-21)
- Merge remote-tracking branch 'origin/main' into chore/bump-version-0.43
- chore: rename from [autoware.universe]{.title-ref} to [autoware_universe]{.title-ref} (#10306)
- Contributors: Hayato Mizushima, Yutaka Kondo
0.42.0 (2025-03-03)
-
Merge remote-tracking branch 'origin/main' into tmp/bot/bump_version_base
-
chore: refine maintainer list (#10110)
File truncated at 100 lines see the full file
Package Dependencies
System Dependencies
Dependant Packages
| Name | Deps |
|---|---|
| tier4_perception_launch |
Launch files
- launch/traffic_light_multi_camera_fusion.launch.xml
-
- input/vector_map [default: /map/vector_map]
- param_path [default: $(find-pkg-share autoware_traffic_light_multi_camera_fusion)/config/traffic_light_multi_camera_fusion.param.yaml]
- output/traffic_signals [default: /perception/traffic_light_recognition/traffic_signals]
- camera_namespaces [default: [camera6, camera7]]
Messages
Services
Plugins
Recent questions tagged autoware_traffic_light_multi_camera_fusion at Robotics Stack Exchange
Package Summary
| Tags | No category tags. |
| Version | 0.48.0 |
| License | Apache License 2.0 |
| Build type | AMENT_CMAKE |
| Use | RECOMMENDED |
Repository Summary
| Description | |
| Checkout URI | https://github.com/autowarefoundation/autoware_universe.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2025-12-03 |
| Dev Status | UNKNOWN |
| Released | UNRELEASED |
| Tags | planner ros calibration self-driving-car autonomous-driving autonomous-vehicles ros2 3d-map autoware |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Tao Zhong
- Masato Saeki
- Yoshi Ri
- Taekjin Lee
Authors
autoware_traffic_light_multi_camera_fusion
Overview
This node fuses traffic light recognition results from multiple cameras to produce a single, reliable traffic light state. By integrating information from different viewpoints and ROIs, it ensures robust performance even in challenging scenarios, such as partial occlusions or recognition errors from an individual camera.
graph LR
subgraph "Multi Camera Feeds"
direction TB
Cam1[" <br> <b>Camera 1</b> <br> State: GREEN <br> Confidence: 0.95"]
Cam2[" <br> <b>Camera 2</b> <br> State: GREEN <br> Confidence: 0.94"]
Cam3[" <br> <b>Camera 3</b> <br> State: RED <br> Confidence: 0.95"]
end
subgraph "Processing"
direction TB
Fusion["<b>Multi-Camera Fusion Node</b> <br><i>Fuses evidence using <br> Bayesian updating</i>"]
end
subgraph "Unified & Robust State"
direction TB
Result[" <br> <b>Final State: GREEN</b>"]
end
Cam1 --> Fusion
Cam2 --> Fusion
Cam3 --> Fusion
Fusion --> Result
style Fusion fill:#e0f7fa,stroke:#00796b,stroke-width:2px,color:#004d40
style Result fill:#e8f5e9,stroke:#2e7d32,stroke-width:3px,color:#1b5e20
How It Works
The fusion algorithm operates in two main stages.
graph TD
subgraph "Input: Multiple Camera Results"
A["Camera 1<br>Recognition Result"]
B["Camera 2<br>Recognition Result"]
C["..."]
end
subgraph "Stage 1: Per-Camera Fusion"
D{"Best ROIs Selection<br><br>For each ROI,<br>select the single most<br>reliable detection result."}
end
E["Best Detection per ROIs"]
subgraph "Stage 2: Group Fusion"
F{"Group Consensus<br><br>Fuse all 'best detections'<br>into a single state for<br>the entire traffic light group<br>using Bayesian updating."}
end
subgraph "Final Output"
G["Final Group State<br>(e.g., GREEN)"]
end
A --> D
B --> D
C --> D
D --> E
E --> F
F --> G
style D fill:#e0f7fa,stroke:#00796b,stroke-width:2px,color:black
style F fill:#e0f7fa,stroke:#00796b,stroke-width:2px,color:black
style E fill:#fff,stroke:#333,stroke-width:2px,stroke-dasharray: 5 5,color:black
style G fill:#e8f5e9,stroke:#2e7d32,stroke-width:3px,color:black
Stage 1: Best View Selection (Per-Camera Fusion)
First, for each individual ROIs, the node selects the single most reliable detection—the “best shot”—from all available camera views.
This selection is based on a strict priority queue:
- Latest Timestamp: Detections with the most recent timestamp are prioritized for the same sensor.
- Known State: Results with a known color (Red, Green, etc.) are prioritized over ‘Unknown’.
- Full Visibility: Detections from non-truncated ROIs (fully visible ROIs) are prioritized.
- Highest Confidence: The result with the highest detection confidence score is prioritized.
This process yields the single most plausible recognition for every ROIs.
Stage 2: Group Consensus (Bayesian Fusion)
Next, the “best shot” detections from Stage 1 are fused to determine a single, coherent state for the entire traffic light group. Instead of simple voting or averaging, this node employs a more principled method: Bayesian updating.
- Belief Score: Each color (Red, Green, Yellow) maintains a “belief score” represented in log-odds for numerical stability and ease of updating.
- Evidence Update: Each selected detection from Stage 1 is treated as a piece of “evidence.” Its confidence score is converted into a log-odds value representing the strength of that evidence.
- Score Accumulation: This evidence is added to the corresponding color’s belief score.
- Final Decision: After accumulating all evidence, the color with the highest final score is chosen as the definitive state for the group.
Input topics
For every camera, the following three topics are subscribed:
File truncated at 100 lines see the full file
Changelog for package autoware_traffic_light_multi_camera_fusion
0.48.0 (2025-11-18)
-
Merge remote-tracking branch 'origin/main' into humble
-
fix(traffic_light_camera_fusion): change group fusion algorithm (#11297)
- fix(traffic_light_camera_fusion): change group fusion algorithm
- style(pre-commit): autofix
- fix: potential array access violation
- fix: validate func
- feat: bayesian update
- doc(traffic_light_camera_fusion): add bayesian method
- chore: adding comments to variables and functions
- doc: make simple, add figure
- doc: fix github style
- doc: fix mermaid error
- style(pre-commit): autofix
- chore: add param prior_log_odds
- fix: modified summation function
- feat: support color and shape
- style(pre-commit): autofix
- doc: update param schema
- fix: bayesian estimation
- style(pre-commit): autofix
- fix: build error
- fix: code health
- fix: code complex
- fix: complex branch
- style(pre-commit): autofix
* modify docs ---------Co-authored-by: pre-commit-ci-lite[bot] <117423508+pre-commit-ci-lite[bot]@users.noreply.github.com> Co-authored-by: Shumpei Wakabayashi <<42209144+shmpwk@users.noreply.github.com>> Co-authored-by: Yuxuan Liu <<619684051@qq.com>> Co-authored-by: Taekjin LEE <<taekjin.lee@tier4.jp>> Co-authored-by: Masato Saeki <<78376491+MasatoSaeki@users.noreply.github.com>> Co-authored-by: MasatoSaeki <<masato.saeki@tier4.jp>>
-
refactor(autoware_traffic_light_multi_camera_fusion): split utils and add test (#10360)
- init
- chore
- style(pre-commit): autofix
- add remained test
- add include file
- refactor
- move variable from cpp to hpp
* chore
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
-
Contributors: Masato Saeki, Ryohsuke Mitsudome, toki-1441
0.47.1 (2025-08-14)
0.47.0 (2025-08-11)
0.46.0 (2025-06-20)
0.45.0 (2025-05-22)
- Merge remote-tracking branch 'origin/main' into tmp/notbot/bump_version_base
- chore: update traffic light packages code owner (#10644) chore: add Taekjin Lee as maintainer to multiple perception packages
- Contributors: Taekjin LEE, TaikiYamada4
0.44.2 (2025-06-10)
0.44.1 (2025-05-01)
0.44.0 (2025-04-18)
0.43.0 (2025-03-21)
- Merge remote-tracking branch 'origin/main' into chore/bump-version-0.43
- chore: rename from [autoware.universe]{.title-ref} to [autoware_universe]{.title-ref} (#10306)
- Contributors: Hayato Mizushima, Yutaka Kondo
0.42.0 (2025-03-03)
-
Merge remote-tracking branch 'origin/main' into tmp/bot/bump_version_base
-
chore: refine maintainer list (#10110)
File truncated at 100 lines see the full file
Package Dependencies
System Dependencies
Dependant Packages
| Name | Deps |
|---|---|
| tier4_perception_launch |
Launch files
- launch/traffic_light_multi_camera_fusion.launch.xml
-
- input/vector_map [default: /map/vector_map]
- param_path [default: $(find-pkg-share autoware_traffic_light_multi_camera_fusion)/config/traffic_light_multi_camera_fusion.param.yaml]
- output/traffic_signals [default: /perception/traffic_light_recognition/traffic_signals]
- camera_namespaces [default: [camera6, camera7]]
Messages
Services
Plugins
Recent questions tagged autoware_traffic_light_multi_camera_fusion at Robotics Stack Exchange
Package Summary
| Tags | No category tags. |
| Version | 0.48.0 |
| License | Apache License 2.0 |
| Build type | AMENT_CMAKE |
| Use | RECOMMENDED |
Repository Summary
| Description | |
| Checkout URI | https://github.com/autowarefoundation/autoware_universe.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2025-12-03 |
| Dev Status | UNKNOWN |
| Released | UNRELEASED |
| Tags | planner ros calibration self-driving-car autonomous-driving autonomous-vehicles ros2 3d-map autoware |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Tao Zhong
- Masato Saeki
- Yoshi Ri
- Taekjin Lee
Authors
autoware_traffic_light_multi_camera_fusion
Overview
This node fuses traffic light recognition results from multiple cameras to produce a single, reliable traffic light state. By integrating information from different viewpoints and ROIs, it ensures robust performance even in challenging scenarios, such as partial occlusions or recognition errors from an individual camera.
graph LR
subgraph "Multi Camera Feeds"
direction TB
Cam1[" <br> <b>Camera 1</b> <br> State: GREEN <br> Confidence: 0.95"]
Cam2[" <br> <b>Camera 2</b> <br> State: GREEN <br> Confidence: 0.94"]
Cam3[" <br> <b>Camera 3</b> <br> State: RED <br> Confidence: 0.95"]
end
subgraph "Processing"
direction TB
Fusion["<b>Multi-Camera Fusion Node</b> <br><i>Fuses evidence using <br> Bayesian updating</i>"]
end
subgraph "Unified & Robust State"
direction TB
Result[" <br> <b>Final State: GREEN</b>"]
end
Cam1 --> Fusion
Cam2 --> Fusion
Cam3 --> Fusion
Fusion --> Result
style Fusion fill:#e0f7fa,stroke:#00796b,stroke-width:2px,color:#004d40
style Result fill:#e8f5e9,stroke:#2e7d32,stroke-width:3px,color:#1b5e20
How It Works
The fusion algorithm operates in two main stages.
graph TD
subgraph "Input: Multiple Camera Results"
A["Camera 1<br>Recognition Result"]
B["Camera 2<br>Recognition Result"]
C["..."]
end
subgraph "Stage 1: Per-Camera Fusion"
D{"Best ROIs Selection<br><br>For each ROI,<br>select the single most<br>reliable detection result."}
end
E["Best Detection per ROIs"]
subgraph "Stage 2: Group Fusion"
F{"Group Consensus<br><br>Fuse all 'best detections'<br>into a single state for<br>the entire traffic light group<br>using Bayesian updating."}
end
subgraph "Final Output"
G["Final Group State<br>(e.g., GREEN)"]
end
A --> D
B --> D
C --> D
D --> E
E --> F
F --> G
style D fill:#e0f7fa,stroke:#00796b,stroke-width:2px,color:black
style F fill:#e0f7fa,stroke:#00796b,stroke-width:2px,color:black
style E fill:#fff,stroke:#333,stroke-width:2px,stroke-dasharray: 5 5,color:black
style G fill:#e8f5e9,stroke:#2e7d32,stroke-width:3px,color:black
Stage 1: Best View Selection (Per-Camera Fusion)
First, for each individual ROIs, the node selects the single most reliable detection—the “best shot”—from all available camera views.
This selection is based on a strict priority queue:
- Latest Timestamp: Detections with the most recent timestamp are prioritized for the same sensor.
- Known State: Results with a known color (Red, Green, etc.) are prioritized over ‘Unknown’.
- Full Visibility: Detections from non-truncated ROIs (fully visible ROIs) are prioritized.
- Highest Confidence: The result with the highest detection confidence score is prioritized.
This process yields the single most plausible recognition for every ROIs.
Stage 2: Group Consensus (Bayesian Fusion)
Next, the “best shot” detections from Stage 1 are fused to determine a single, coherent state for the entire traffic light group. Instead of simple voting or averaging, this node employs a more principled method: Bayesian updating.
- Belief Score: Each color (Red, Green, Yellow) maintains a “belief score” represented in log-odds for numerical stability and ease of updating.
- Evidence Update: Each selected detection from Stage 1 is treated as a piece of “evidence.” Its confidence score is converted into a log-odds value representing the strength of that evidence.
- Score Accumulation: This evidence is added to the corresponding color’s belief score.
- Final Decision: After accumulating all evidence, the color with the highest final score is chosen as the definitive state for the group.
Input topics
For every camera, the following three topics are subscribed:
File truncated at 100 lines see the full file
Changelog for package autoware_traffic_light_multi_camera_fusion
0.48.0 (2025-11-18)
-
Merge remote-tracking branch 'origin/main' into humble
-
fix(traffic_light_camera_fusion): change group fusion algorithm (#11297)
- fix(traffic_light_camera_fusion): change group fusion algorithm
- style(pre-commit): autofix
- fix: potential array access violation
- fix: validate func
- feat: bayesian update
- doc(traffic_light_camera_fusion): add bayesian method
- chore: adding comments to variables and functions
- doc: make simple, add figure
- doc: fix github style
- doc: fix mermaid error
- style(pre-commit): autofix
- chore: add param prior_log_odds
- fix: modified summation function
- feat: support color and shape
- style(pre-commit): autofix
- doc: update param schema
- fix: bayesian estimation
- style(pre-commit): autofix
- fix: build error
- fix: code health
- fix: code complex
- fix: complex branch
- style(pre-commit): autofix
* modify docs ---------Co-authored-by: pre-commit-ci-lite[bot] <117423508+pre-commit-ci-lite[bot]@users.noreply.github.com> Co-authored-by: Shumpei Wakabayashi <<42209144+shmpwk@users.noreply.github.com>> Co-authored-by: Yuxuan Liu <<619684051@qq.com>> Co-authored-by: Taekjin LEE <<taekjin.lee@tier4.jp>> Co-authored-by: Masato Saeki <<78376491+MasatoSaeki@users.noreply.github.com>> Co-authored-by: MasatoSaeki <<masato.saeki@tier4.jp>>
-
refactor(autoware_traffic_light_multi_camera_fusion): split utils and add test (#10360)
- init
- chore
- style(pre-commit): autofix
- add remained test
- add include file
- refactor
- move variable from cpp to hpp
* chore
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
-
Contributors: Masato Saeki, Ryohsuke Mitsudome, toki-1441
0.47.1 (2025-08-14)
0.47.0 (2025-08-11)
0.46.0 (2025-06-20)
0.45.0 (2025-05-22)
- Merge remote-tracking branch 'origin/main' into tmp/notbot/bump_version_base
- chore: update traffic light packages code owner (#10644) chore: add Taekjin Lee as maintainer to multiple perception packages
- Contributors: Taekjin LEE, TaikiYamada4
0.44.2 (2025-06-10)
0.44.1 (2025-05-01)
0.44.0 (2025-04-18)
0.43.0 (2025-03-21)
- Merge remote-tracking branch 'origin/main' into chore/bump-version-0.43
- chore: rename from [autoware.universe]{.title-ref} to [autoware_universe]{.title-ref} (#10306)
- Contributors: Hayato Mizushima, Yutaka Kondo
0.42.0 (2025-03-03)
-
Merge remote-tracking branch 'origin/main' into tmp/bot/bump_version_base
-
chore: refine maintainer list (#10110)
File truncated at 100 lines see the full file
Package Dependencies
System Dependencies
Dependant Packages
| Name | Deps |
|---|---|
| tier4_perception_launch |
Launch files
- launch/traffic_light_multi_camera_fusion.launch.xml
-
- input/vector_map [default: /map/vector_map]
- param_path [default: $(find-pkg-share autoware_traffic_light_multi_camera_fusion)/config/traffic_light_multi_camera_fusion.param.yaml]
- output/traffic_signals [default: /perception/traffic_light_recognition/traffic_signals]
- camera_namespaces [default: [camera6, camera7]]
Messages
Services
Plugins
Recent questions tagged autoware_traffic_light_multi_camera_fusion at Robotics Stack Exchange
Package Summary
| Tags | No category tags. |
| Version | 0.48.0 |
| License | Apache License 2.0 |
| Build type | AMENT_CMAKE |
| Use | RECOMMENDED |
Repository Summary
| Description | |
| Checkout URI | https://github.com/autowarefoundation/autoware_universe.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2025-12-03 |
| Dev Status | UNKNOWN |
| Released | UNRELEASED |
| Tags | planner ros calibration self-driving-car autonomous-driving autonomous-vehicles ros2 3d-map autoware |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Tao Zhong
- Masato Saeki
- Yoshi Ri
- Taekjin Lee
Authors
autoware_traffic_light_multi_camera_fusion
Overview
This node fuses traffic light recognition results from multiple cameras to produce a single, reliable traffic light state. By integrating information from different viewpoints and ROIs, it ensures robust performance even in challenging scenarios, such as partial occlusions or recognition errors from an individual camera.
graph LR
subgraph "Multi Camera Feeds"
direction TB
Cam1[" <br> <b>Camera 1</b> <br> State: GREEN <br> Confidence: 0.95"]
Cam2[" <br> <b>Camera 2</b> <br> State: GREEN <br> Confidence: 0.94"]
Cam3[" <br> <b>Camera 3</b> <br> State: RED <br> Confidence: 0.95"]
end
subgraph "Processing"
direction TB
Fusion["<b>Multi-Camera Fusion Node</b> <br><i>Fuses evidence using <br> Bayesian updating</i>"]
end
subgraph "Unified & Robust State"
direction TB
Result[" <br> <b>Final State: GREEN</b>"]
end
Cam1 --> Fusion
Cam2 --> Fusion
Cam3 --> Fusion
Fusion --> Result
style Fusion fill:#e0f7fa,stroke:#00796b,stroke-width:2px,color:#004d40
style Result fill:#e8f5e9,stroke:#2e7d32,stroke-width:3px,color:#1b5e20
How It Works
The fusion algorithm operates in two main stages.
graph TD
subgraph "Input: Multiple Camera Results"
A["Camera 1<br>Recognition Result"]
B["Camera 2<br>Recognition Result"]
C["..."]
end
subgraph "Stage 1: Per-Camera Fusion"
D{"Best ROIs Selection<br><br>For each ROI,<br>select the single most<br>reliable detection result."}
end
E["Best Detection per ROIs"]
subgraph "Stage 2: Group Fusion"
F{"Group Consensus<br><br>Fuse all 'best detections'<br>into a single state for<br>the entire traffic light group<br>using Bayesian updating."}
end
subgraph "Final Output"
G["Final Group State<br>(e.g., GREEN)"]
end
A --> D
B --> D
C --> D
D --> E
E --> F
F --> G
style D fill:#e0f7fa,stroke:#00796b,stroke-width:2px,color:black
style F fill:#e0f7fa,stroke:#00796b,stroke-width:2px,color:black
style E fill:#fff,stroke:#333,stroke-width:2px,stroke-dasharray: 5 5,color:black
style G fill:#e8f5e9,stroke:#2e7d32,stroke-width:3px,color:black
Stage 1: Best View Selection (Per-Camera Fusion)
First, for each individual ROIs, the node selects the single most reliable detection—the “best shot”—from all available camera views.
This selection is based on a strict priority queue:
- Latest Timestamp: Detections with the most recent timestamp are prioritized for the same sensor.
- Known State: Results with a known color (Red, Green, etc.) are prioritized over ‘Unknown’.
- Full Visibility: Detections from non-truncated ROIs (fully visible ROIs) are prioritized.
- Highest Confidence: The result with the highest detection confidence score is prioritized.
This process yields the single most plausible recognition for every ROIs.
Stage 2: Group Consensus (Bayesian Fusion)
Next, the “best shot” detections from Stage 1 are fused to determine a single, coherent state for the entire traffic light group. Instead of simple voting or averaging, this node employs a more principled method: Bayesian updating.
- Belief Score: Each color (Red, Green, Yellow) maintains a “belief score” represented in log-odds for numerical stability and ease of updating.
- Evidence Update: Each selected detection from Stage 1 is treated as a piece of “evidence.” Its confidence score is converted into a log-odds value representing the strength of that evidence.
- Score Accumulation: This evidence is added to the corresponding color’s belief score.
- Final Decision: After accumulating all evidence, the color with the highest final score is chosen as the definitive state for the group.
Input topics
For every camera, the following three topics are subscribed:
File truncated at 100 lines see the full file
Changelog for package autoware_traffic_light_multi_camera_fusion
0.48.0 (2025-11-18)
-
Merge remote-tracking branch 'origin/main' into humble
-
fix(traffic_light_camera_fusion): change group fusion algorithm (#11297)
- fix(traffic_light_camera_fusion): change group fusion algorithm
- style(pre-commit): autofix
- fix: potential array access violation
- fix: validate func
- feat: bayesian update
- doc(traffic_light_camera_fusion): add bayesian method
- chore: adding comments to variables and functions
- doc: make simple, add figure
- doc: fix github style
- doc: fix mermaid error
- style(pre-commit): autofix
- chore: add param prior_log_odds
- fix: modified summation function
- feat: support color and shape
- style(pre-commit): autofix
- doc: update param schema
- fix: bayesian estimation
- style(pre-commit): autofix
- fix: build error
- fix: code health
- fix: code complex
- fix: complex branch
- style(pre-commit): autofix
* modify docs ---------Co-authored-by: pre-commit-ci-lite[bot] <117423508+pre-commit-ci-lite[bot]@users.noreply.github.com> Co-authored-by: Shumpei Wakabayashi <<42209144+shmpwk@users.noreply.github.com>> Co-authored-by: Yuxuan Liu <<619684051@qq.com>> Co-authored-by: Taekjin LEE <<taekjin.lee@tier4.jp>> Co-authored-by: Masato Saeki <<78376491+MasatoSaeki@users.noreply.github.com>> Co-authored-by: MasatoSaeki <<masato.saeki@tier4.jp>>
-
refactor(autoware_traffic_light_multi_camera_fusion): split utils and add test (#10360)
- init
- chore
- style(pre-commit): autofix
- add remained test
- add include file
- refactor
- move variable from cpp to hpp
* chore
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
-
Contributors: Masato Saeki, Ryohsuke Mitsudome, toki-1441
0.47.1 (2025-08-14)
0.47.0 (2025-08-11)
0.46.0 (2025-06-20)
0.45.0 (2025-05-22)
- Merge remote-tracking branch 'origin/main' into tmp/notbot/bump_version_base
- chore: update traffic light packages code owner (#10644) chore: add Taekjin Lee as maintainer to multiple perception packages
- Contributors: Taekjin LEE, TaikiYamada4
0.44.2 (2025-06-10)
0.44.1 (2025-05-01)
0.44.0 (2025-04-18)
0.43.0 (2025-03-21)
- Merge remote-tracking branch 'origin/main' into chore/bump-version-0.43
- chore: rename from [autoware.universe]{.title-ref} to [autoware_universe]{.title-ref} (#10306)
- Contributors: Hayato Mizushima, Yutaka Kondo
0.42.0 (2025-03-03)
-
Merge remote-tracking branch 'origin/main' into tmp/bot/bump_version_base
-
chore: refine maintainer list (#10110)
File truncated at 100 lines see the full file
Package Dependencies
System Dependencies
Dependant Packages
| Name | Deps |
|---|---|
| tier4_perception_launch |
Launch files
- launch/traffic_light_multi_camera_fusion.launch.xml
-
- input/vector_map [default: /map/vector_map]
- param_path [default: $(find-pkg-share autoware_traffic_light_multi_camera_fusion)/config/traffic_light_multi_camera_fusion.param.yaml]
- output/traffic_signals [default: /perception/traffic_light_recognition/traffic_signals]
- camera_namespaces [default: [camera6, camera7]]
Messages
Services
Plugins
Recent questions tagged autoware_traffic_light_multi_camera_fusion at Robotics Stack Exchange
Package Summary
| Tags | No category tags. |
| Version | 0.48.0 |
| License | Apache License 2.0 |
| Build type | AMENT_CMAKE |
| Use | RECOMMENDED |
Repository Summary
| Description | |
| Checkout URI | https://github.com/autowarefoundation/autoware_universe.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2025-12-03 |
| Dev Status | UNKNOWN |
| Released | UNRELEASED |
| Tags | planner ros calibration self-driving-car autonomous-driving autonomous-vehicles ros2 3d-map autoware |
| Contributing |
Help Wanted (-)
Good First Issues (-) Pull Requests to Review (-) |
Package Description
Additional Links
Maintainers
- Tao Zhong
- Masato Saeki
- Yoshi Ri
- Taekjin Lee
Authors
autoware_traffic_light_multi_camera_fusion
Overview
This node fuses traffic light recognition results from multiple cameras to produce a single, reliable traffic light state. By integrating information from different viewpoints and ROIs, it ensures robust performance even in challenging scenarios, such as partial occlusions or recognition errors from an individual camera.
graph LR
subgraph "Multi Camera Feeds"
direction TB
Cam1[" <br> <b>Camera 1</b> <br> State: GREEN <br> Confidence: 0.95"]
Cam2[" <br> <b>Camera 2</b> <br> State: GREEN <br> Confidence: 0.94"]
Cam3[" <br> <b>Camera 3</b> <br> State: RED <br> Confidence: 0.95"]
end
subgraph "Processing"
direction TB
Fusion["<b>Multi-Camera Fusion Node</b> <br><i>Fuses evidence using <br> Bayesian updating</i>"]
end
subgraph "Unified & Robust State"
direction TB
Result[" <br> <b>Final State: GREEN</b>"]
end
Cam1 --> Fusion
Cam2 --> Fusion
Cam3 --> Fusion
Fusion --> Result
style Fusion fill:#e0f7fa,stroke:#00796b,stroke-width:2px,color:#004d40
style Result fill:#e8f5e9,stroke:#2e7d32,stroke-width:3px,color:#1b5e20
How It Works
The fusion algorithm operates in two main stages.
graph TD
subgraph "Input: Multiple Camera Results"
A["Camera 1<br>Recognition Result"]
B["Camera 2<br>Recognition Result"]
C["..."]
end
subgraph "Stage 1: Per-Camera Fusion"
D{"Best ROIs Selection<br><br>For each ROI,<br>select the single most<br>reliable detection result."}
end
E["Best Detection per ROIs"]
subgraph "Stage 2: Group Fusion"
F{"Group Consensus<br><br>Fuse all 'best detections'<br>into a single state for<br>the entire traffic light group<br>using Bayesian updating."}
end
subgraph "Final Output"
G["Final Group State<br>(e.g., GREEN)"]
end
A --> D
B --> D
C --> D
D --> E
E --> F
F --> G
style D fill:#e0f7fa,stroke:#00796b,stroke-width:2px,color:black
style F fill:#e0f7fa,stroke:#00796b,stroke-width:2px,color:black
style E fill:#fff,stroke:#333,stroke-width:2px,stroke-dasharray: 5 5,color:black
style G fill:#e8f5e9,stroke:#2e7d32,stroke-width:3px,color:black
Stage 1: Best View Selection (Per-Camera Fusion)
First, for each individual ROIs, the node selects the single most reliable detection—the “best shot”—from all available camera views.
This selection is based on a strict priority queue:
- Latest Timestamp: Detections with the most recent timestamp are prioritized for the same sensor.
- Known State: Results with a known color (Red, Green, etc.) are prioritized over ‘Unknown’.
- Full Visibility: Detections from non-truncated ROIs (fully visible ROIs) are prioritized.
- Highest Confidence: The result with the highest detection confidence score is prioritized.
This process yields the single most plausible recognition for every ROIs.
Stage 2: Group Consensus (Bayesian Fusion)
Next, the “best shot” detections from Stage 1 are fused to determine a single, coherent state for the entire traffic light group. Instead of simple voting or averaging, this node employs a more principled method: Bayesian updating.
- Belief Score: Each color (Red, Green, Yellow) maintains a “belief score” represented in log-odds for numerical stability and ease of updating.
- Evidence Update: Each selected detection from Stage 1 is treated as a piece of “evidence.” Its confidence score is converted into a log-odds value representing the strength of that evidence.
- Score Accumulation: This evidence is added to the corresponding color’s belief score.
- Final Decision: After accumulating all evidence, the color with the highest final score is chosen as the definitive state for the group.
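The sketch below illustrates this accumulation numerically. It assumes the standard log-odds transform log(p / (1 - p)) for a confidence p, and a prior_log_odds starting value (a parameter named in this package's changelog); the function and type names themselves are hypothetical.

#include <algorithm>
#include <array>
#include <cmath>
#include <vector>

enum Color { RED = 0, YELLOW = 1, GREEN = 2 };

// Hypothetical "best shot" from Stage 1: a color hypothesis and its confidence.
struct BestShot
{
  Color color;
  double confidence;  // in (0, 1)
};

// Convert a confidence p to log-odds, clamped away from 0 and 1 for stability.
double to_log_odds(double p)
{
  p = std::clamp(p, 1e-6, 1.0 - 1e-6);
  return std::log(p / (1.0 - p));
}

// Accumulate evidence per color and pick the color with the highest belief.
Color fuse_group(const std::vector<BestShot> & best_shots, double prior_log_odds)
{
  std::array<double, 3> belief;
  belief.fill(prior_log_odds);  // every color starts at the prior
  for (const auto & shot : best_shots) {
    belief[shot.color] += to_log_odds(shot.confidence);  // evidence update
  }
  return static_cast<Color>(
    std::max_element(belief.begin(), belief.end()) - belief.begin());
}

Working in log-odds turns the multiplicative Bayesian update into a simple sum, which keeps the computation numerically stable even when many pieces of evidence are combined.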
Input topics
For every camera, the node subscribes to the following three topics:
File truncated at 100 lines; see the full file.
Changelog for package autoware_traffic_light_multi_camera_fusion
0.48.0 (2025-11-18)
- Merge remote-tracking branch 'origin/main' into humble
- fix(traffic_light_camera_fusion): change group fusion algorithm (#11297)
- fix(traffic_light_camera_fusion): change group fusion algorithm
- style(pre-commit): autofix
- fix: potential array access violation
- fix: validate func
- feat: bayesian update
- doc(traffic_light_camera_fusion): add bayesian method
- chore: adding comments to variables and functions
- doc: make simple, add figure
- doc: fix github style
- doc: fix mermaid error
- style(pre-commit): autofix
- chore: add param prior_log_odds
- fix: modified summation function
- feat: support color and shape
- style(pre-commit): autofix
- doc: update param schema
- fix: bayesian estimation
- style(pre-commit): autofix
- fix: build error
- fix: code health
- fix: code complex
- fix: complex branch
- style(pre-commit): autofix
- modify docs
Co-authored-by: pre-commit-ci-lite[bot] <117423508+pre-commit-ci-lite[bot]@users.noreply.github.com>
Co-authored-by: Shumpei Wakabayashi <42209144+shmpwk@users.noreply.github.com>
Co-authored-by: Yuxuan Liu <619684051@qq.com>
Co-authored-by: Taekjin LEE <taekjin.lee@tier4.jp>
Co-authored-by: Masato Saeki <78376491+MasatoSaeki@users.noreply.github.com>
Co-authored-by: MasatoSaeki <masato.saeki@tier4.jp>
- refactor(autoware_traffic_light_multi_camera_fusion): split utils and add test (#10360)
- init
- chore
- style(pre-commit): autofix
- add remained test
- add include file
- refactor
- move variable from cpp to hpp
- chore
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
- Contributors: Masato Saeki, Ryohsuke Mitsudome, toki-1441
0.47.1 (2025-08-14)
0.47.0 (2025-08-11)
0.46.0 (2025-06-20)
0.45.0 (2025-05-22)
- Merge remote-tracking branch 'origin/main' into tmp/notbot/bump_version_base
- chore: update traffic light packages code owner (#10644): add Taekjin Lee as maintainer to multiple perception packages
- Contributors: Taekjin LEE, TaikiYamada4
0.44.2 (2025-06-10)
0.44.1 (2025-05-01)
0.44.0 (2025-04-18)
0.43.0 (2025-03-21)
- Merge remote-tracking branch 'origin/main' into chore/bump-version-0.43
- chore: rename from autoware.universe to autoware_universe (#10306)
- Contributors: Hayato Mizushima, Yutaka Kondo
0.42.0 (2025-03-03)
- Merge remote-tracking branch 'origin/main' into tmp/bot/bump_version_base
- chore: refine maintainer list (#10110)
File truncated at 100 lines; see the full file.
Package Dependencies
System Dependencies
Dependent Packages
| Name | Deps |
|---|---|
| tier4_perception_launch | |
Launch files
- launch/traffic_light_multi_camera_fusion.launch.xml
  - input/vector_map [default: /map/vector_map]
  - param_path [default: $(find-pkg-share autoware_traffic_light_multi_camera_fusion)/config/traffic_light_multi_camera_fusion.param.yaml]
  - output/traffic_signals [default: /perception/traffic_light_recognition/traffic_signals]
  - camera_namespaces [default: [camera6, camera7]]
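These arguments can be overridden when the launch file is included from a parent launch file. The sketch below is a hypothetical ROS 2 launch XML wrapper, not part of this package, and the camera namespaces shown are placeholders.

<!-- Hypothetical parent launch file (not shipped with this package). -->
<launch>
  <include file="$(find-pkg-share autoware_traffic_light_multi_camera_fusion)/launch/traffic_light_multi_camera_fusion.launch.xml">
    <!-- Override the default [camera6, camera7]; values here are placeholders. -->
    <arg name="camera_namespaces" value="[camera0, camera1]"/>
  </include>
</launch>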
Messages
Services
Plugins