SMOdatabroker

1. Current status

1.1. Need tenant ID, intersection ID, and region to perform a mutation

1.2. Tenant is derived from the port
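
A hedged sketch tying 1.1 and 1.2 together; the port-to-tenant mapping and every name here are illustrative assumptions, not the real configuration:

  # Hypothetical port -> tenant assignments (assumption, for illustration only).
  PORT_TO_TENANT = {
      9001: "tenant-a",
      9002: "tenant-b",
  }

  def build_mutation_variables(port: int, intersection_id: str, region: str) -> dict:
      """All three values are required before the GQL mutation can run."""
      return {
          "tenantId": PORT_TO_TENANT[port],  # tenant is derived from the port
          "intersectionId": intersection_id,
          "region": region,
      }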

2. Ongoing

2.1. TODO Set up meeting with bboyd (and others) to nail down device ERD

2.1.1. @brian

2.1.2. RSU

  1. has location
  2. N:M relationship between RSU and Intersection

2.1.3. Intersection

  1. has location

2.1.4. TSC

  1. N:M relationship between TSC and Intersection
  2. No location

2.1.5. SCM

  1. Related to TSC, optional 1:N
  2. No location

2.1.6. POE

  1. No location
  2. Related to:
    1. RSU N:M
    2. Intersection N:M

2.2. TODO Database

2.2.1. Update rsu table to be a “device table”, not just RSU

Will support RSU, TSC, SCM, and POE devices; a hedged DDL sketch follows the column list.

  1. (PK) tenant
  2. (PK) deviceip
  3. (PK) devicetype
    1. RSU
    2. TSC
    3. SCM
    4. POE
  4. status
  5. location
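
A sketch of the widened table, assuming Aurora PostgreSQL reached via psycopg2; the column types and the CHECK constraint are assumptions beyond the column list above:

  import psycopg2  # assumption: Aurora PostgreSQL

  DEVICE_TABLE_DDL = """
  CREATE TABLE IF NOT EXISTS device (
      tenant      TEXT NOT NULL,
      deviceip    INET NOT NULL,
      devicetype  TEXT NOT NULL
          CHECK (devicetype IN ('RSU', 'TSC', 'SCM', 'POE')),
      status      TEXT,
      location    POINT,  -- assumption: location stored as a point
      PRIMARY KEY (tenant, deviceip, devicetype)
  );
  """

  def create_device_table(dsn: str) -> None:
      # psycopg2's connection context manager commits on successful exit.
      with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
          cur.execute(DEVICE_TABLE_DDL)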

2.2.2. Populate the rsuintersection N:M table

  1. Derived from Maximo or from Ryota’s spreadsheet?
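
Whichever source we use, the junction table itself is small; a hedged sketch with assumed column names:

  RSU_INTERSECTION_DDL = """
  CREATE TABLE IF NOT EXISTS rsuintersection (
      tenant           TEXT NOT NULL,
      rsu_deviceip     INET NOT NULL,
      intersection_id  TEXT NOT NULL,
      PRIMARY KEY (tenant, rsu_deviceip, intersection_id)
  );
  """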

2.3. TODO IaC

2.3.1. Give container access to rds-aurora
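
A hedged IaC sketch, assuming AWS CDK in Python and Aurora PostgreSQL; the security group IDs and construct names are placeholders, not the real resources:

  from aws_cdk import App, Stack
  from aws_cdk import aws_ec2 as ec2
  from constructs import Construct

  class BrokerDbAccessStack(Stack):
      def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
          super().__init__(scope, construct_id, **kwargs)

          # Import the existing security groups (placeholder IDs).
          aurora_sg = ec2.SecurityGroup.from_security_group_id(
              self, "AuroraSg", "sg-aurora-placeholder"
          )
          broker_sg = ec2.SecurityGroup.from_security_group_id(
              self, "BrokerSg", "sg-broker-placeholder"
          )

          # Allow the broker container to reach Aurora on the PostgreSQL port.
          aurora_sg.add_ingress_rule(
              peer=broker_sg,
              connection=ec2.Port.tcp(5432),
              description="SMOdatabroker container -> rds-aurora",
          )

  app = App()
  BrokerDbAccessStack(app, "BrokerDbAccess")
  app.synth()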

2.4. TODO code

2.4.1. Replicate logic for

  1. Save incoming status
  2. Trigger GQL mutation
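
A hedged sketch of the flow to replicate; the table layout, endpoint, and mutation name are assumptions:

  import requests  # assumption: the GQL mutation is sent over HTTP

  GQL_ENDPOINT = "https://example.invalid/graphql"  # placeholder endpoint
  STATUS_MUTATION = """
  mutation UpdateDeviceStatus($tenantId: ID!, $intersectionId: ID!,
                              $region: String!, $status: String!) {
    updateDeviceStatus(tenantId: $tenantId, intersectionId: $intersectionId,
                       region: $region, status: $status) { ok }
  }
  """

  def handle_incoming_status(conn, record: dict) -> None:
      # 1. Save the incoming status (conn is a DB-API connection to Aurora).
      with conn.cursor() as cur:
          cur.execute(
              "UPDATE device SET status = %s"
              " WHERE tenant = %s AND deviceip = %s AND devicetype = %s",
              (record["status"], record["tenant"],
               record["deviceip"], record["devicetype"]),
          )
      conn.commit()

      # 2. Trigger the GQL mutation (needs tenant, intersection, and region).
      requests.post(
          GQL_ENDPOINT,
          json={
              "query": STATUS_MUTATION,
              "variables": {
                  "tenantId": record["tenant"],
                  "intersectionId": record["intersection_id"],
                  "region": record["region"],
                  "status": record["status"],
              },
          },
          timeout=10,
      )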

3. Deprecated notes

3.1. Connecting to DS&A

3.1.1. Use VPC peering

VPC peering is already in place and can be used now. We can access 10.140.0.0/23 or 10.2.0.0/18 seamlessly from the CIRRUS Private Subnet (not CIRRUS RSU Access), as per:

3.2. 2248: replace data from SNS topics

3.2.1. Architecture decisions

To ingest from MSK we have evaluated the following options, all but the first of which are possible:

  1. Trigger Cloud lambda from DS&A MSK

    This is not possible; AWS does not support it.

  2. DS&A Lambda

    Set up a new lambda in the DS&A account, triggered from the DS&A MSK; that lambda would then invoke the lambda on the Cloud side, in the appropriate environment. The overhead of building and maintaining lambdas on the DS&A side is unappealing.

  3. Pure Kafka trigger

    AWS Lambdas do have the ability to trigger from “pure” Kafka (i.e. not MSK), and it does look like it’s possible to do this cross-account (we have not proved this out).

  4. Poll from container

    On the Cloud side, we set up a persistent container that polls MSK (cross-account access works here; this has been proved out).

    Because the container polls on an interval, this will introduce additional latency of up to the length of that interval.

  5. Poll from lambda

    Similar to the poll-from-container option, but in a lambda on a cron trigger. Because it’s a very lightweight operation with low volume most of the time, with erratic spikes, it is well suited to a lambda.

    Because the lambda runs on a cron, this will introduce additional latency of up to the length of the cron interval.

    Kevin estimates this option as the most expensive.

  6. Subscribe from container

    The persistent behavior of a container allows us to subscribe a Kafka client rather than poll; two variants follow.

    1. Fargate

      On the Cloud side, we set up a persistent container that uses a Kafka client to subscribe to MSK (cross-account should work here, but this has not yet been proved out). This requires significantly less IaC than EC2, and Kevin estimates it to be slightly more expensive than EC2.

      Because this is a smaller lift, at close to the cheapest cost, and it introduces no additional latency, we believe this to be the best option at this time; a consumer sketch follows this option list.

    2. EC2

      An EC2 container is fairly heavy and not needed for some of our applications; this one will likely not benefit from a container, due to the lower volume. Likely the cheapest option.
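
A minimal sketch of the subscriber, using aiokafka (flagged for evaluation in 3.4.3); the broker addresses, topic, and group id are placeholders, and a real MSK connection would also need TLS or IAM auth settings:

  import asyncio

  from aiokafka import AIOKafkaConsumer

  BOOTSTRAP_SERVERS = "b-1.example:9092,b-2.example:9092"  # placeholder brokers

  async def consume() -> None:
      consumer = AIOKafkaConsumer(
          "devicealertsv1",
          bootstrap_servers=BOOTSTRAP_SERVERS,
          group_id="teamapplicationv1",  # consumer group ids must be unique
      )
      await consumer.start()
      try:
          async for msg in consumer:
              # Here: save the incoming status, then trigger the GQL mutation.
              print(msg.topic, msg.partition, msg.offset, msg.value)
      finally:
          await consumer.stop()

  asyncio.run(consume())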

3.3. ELSE

3.3.1. similar things to traffic events

3.4. ALL

3.4.1. alert and status devicealertsv1

3.4.2. how do we create consumer group IDs? they must be unique

teamapplicationv1
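
If the convention is team + application + version (an assumption read off the example above), group ids could be built like this:

  def consumer_group_id(team: str, application: str, version: int) -> str:
      # e.g. consumer_group_id("team", "application", 1) -> "teamapplicationv1"
      return f"{team}{application}v{version}"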

3.4.3. look into aiokafka

3.4.4. single user for all the Kafka streams

3.4.5. do we transfer the pinger?

3.4.6. only the rsu-alerts service is being deprecated

4. Kafka details

4.1. we have up to 12 partitions

4.2. unique consumer group IDs each retrieve their own copy of the stream

4.3. can look up offsets by time

4.4. can do consumer seek
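
A hedged aiokafka sketch of 4.3 and 4.4 together: look up offsets by timestamp, then seek. The topic name and the 12-partition count follow the notes above; everything else is a placeholder:

  import asyncio
  import time

  from aiokafka import AIOKafkaConsumer, TopicPartition

  async def replay_last_hour() -> None:
      consumer = AIOKafkaConsumer(bootstrap_servers="b-1.example:9092")
      await consumer.start()
      try:
          # We have up to 12 partitions (4.1); assign them all explicitly.
          partitions = [TopicPartition("devicealertsv1", p) for p in range(12)]
          consumer.assign(partitions)

          # 4.3: look up the offset for a timestamp (one hour ago, in ms) ...
          one_hour_ago = int((time.time() - 3600) * 1000)
          offsets = await consumer.offsets_for_times(
              {tp: one_hour_ago for tp in partitions}
          )

          # 4.4: ... and seek each partition to that offset.
          for tp, offset_ts in offsets.items():
              if offset_ts is not None:
                  consumer.seek(tp, offset_ts.offset)

          async for msg in consumer:
              print(msg.partition, msg.offset, msg.value)
      finally:
          await consumer.stop()

  asyncio.run(replay_last_hour())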

Author: Ashton Honnecke

Created: 2023-09-01 Fri 12:50