RHINENG-21214: split system_platform table to reflect cyndi changes #1988
Conversation
Referenced Jiras: RHINENG-21214
Reviewer's Guide
Split the monolithic system_platform storage into a new system_inventory table for host metadata and a system_patch table for patch/advisory state, wire them together via a backward-compatible system_platform view with INSTEAD OF triggers (a sketch of such a view follows the diagrams below), and propagate the schema change through migrations, Go models/DAO logic, tests, and dev data so existing APIs continue to behave while aligning with cyndi-backed inventory.hosts.
Sequence diagram for listener upload writing to system_inventory and system_patch
sequenceDiagram
actor Host
participant Listener as listener_service
participant DB as postgres
Host->>Listener: upload host data
Listener->>Listener: updateSystemPlatform(accountID, host)
Listener->>Listener: build SystemPlatform model
Listener->>Listener: storeOrUpdateSysPlatform(tx, system, colsToUpdate)
alt upsert system_inventory
Listener->>DB: INSERT system_inventory
activate DB
DB-->>Listener: inserted row with id, inventory_id, rh_account_id, unchanged_since
deactivate DB
end
Listener->>Listener: update SystemPlatform.ID from SystemInventory.ID
alt upsert system_patch
Listener->>DB: INSERT system_patch ON CONFLICT DO UPDATE template_id
activate DB
DB-->>Listener: row inserted or updated (last_evaluation returned)
deactivate DB
end
Listener->>Listener: set SystemPlatform.LastEvaluation from SystemPatch
Listener-->>Host: response with updated system state
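In SQL terms, the two writes in this flow correspond roughly to the statements below. This is a sketch only: the real statements are generated by GORM, the literal values are placeholders, and the column lists are abbreviated; the conflict targets match the ones discussed in the review comments further down.
```sql
-- Upsert the host-metadata half; RETURNING feeds id and unchanged_since
-- back into the Go model.
INSERT INTO system_inventory (inventory_id, rh_account_id, display_name)
VALUES ('00000000-0000-0000-0000-000000000001', 1, 'host-1')
ON CONFLICT (rh_account_id, inventory_id)
    DO UPDATE SET display_name = EXCLUDED.display_name
RETURNING id, inventory_id, rh_account_id, unchanged_since;

-- Upsert the patch half, keyed by the id returned above; only template_id
-- is updated on conflict, and last_evaluation is read back.
INSERT INTO system_patch (system_id, rh_account_id, template_id)
VALUES (42, 1, 7)
ON CONFLICT (rh_account_id, system_id)
    DO UPDATE SET template_id = EXCLUDED.template_id
RETURNING last_evaluation;
```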
ER diagram for split system_platform into system_inventory and system_patch
erDiagram
rh_account {
int id PK
string name
}
reporter {
int id PK
}
template {
int id PK
int rh_account_id FK
}
system_inventory {
bigint id PK
uuid inventory_id
int rh_account_id FK
text vmaas_json
text json_checksum
timestamptz last_updated
timestamptz unchanged_since
timestamptz last_upload
boolean stale
text display_name
int reporter_id FK
jsonb yum_updates
text yum_checksum
boolean satellite_managed
boolean built_pkgcache
text arch
boolean bootc
jsonb tags
timestamptz created
text[] workspaces
timestamptz stale_timestamp
timestamptz stale_warning_timestamp
timestamptz culled_timestamp
text os_name
smallint os_major
smallint os_minor
text rhsm_version
uuid subscription_manager_id
boolean sap_workload
text[] sap_workload_sids
boolean ansible_workload
text ansible_workload_controller_version
boolean mssql_workload
text mssql_workload_version
}
system_patch {
bigint system_id PK
int rh_account_id PK
timestamptz last_evaluation
int installable_advisory_count_cache
int installable_advisory_enh_count_cache
int installable_advisory_bug_count_cache
int installable_advisory_sec_count_cache
int packages_installed
int packages_installable
int packages_applicable
boolean third_party
int applicable_advisory_count_cache
int applicable_advisory_enh_count_cache
int applicable_advisory_bug_count_cache
int applicable_advisory_sec_count_cache
bigint template_id FK
}
system_advisories {
int advisory_id FK
int rh_account_id FK
bigint system_id FK
int status_id FK
}
system_repo {
bigint system_id FK
bigint repo_id FK
int rh_account_id FK
}
system_package2 {
bigint system_id FK
int rh_account_id FK
bigint package_id FK
bigint installed_id FK
bigint available_id FK
bigint applicable_id FK
}
advisory_account_data {
int advisory_id PK
int rh_account_id PK
int systems_installable
int systems_applicable
}
rh_account ||--o{ system_inventory : has
rh_account ||--o{ system_patch : has
rh_account ||--o{ advisory_account_data : has
reporter ||--o{ system_inventory : reports
template ||--o{ system_patch : used_by
system_inventory ||--|| system_patch : splits_to
system_inventory ||--o{ system_advisories : referenced_by
system_inventory ||--o{ system_repo : referenced_by
system_inventory ||--o{ system_package2 : referenced_by
system_advisories ||--o{ advisory_account_data : aggregates
Class diagram for Go models SystemPlatform, SystemInventory, and SystemPatch
classDiagram
class SystemPlatform {
int64 ID
string InventoryID
int RhAccountID
*string VmaasJSON
*string JSONChecksum
*time_Time LastUpdated
*time_Time UnchangedSince
*time_Time LastEvaluation
*time_Time LastUpload
*time_Time StaleTimestamp
*time_Time StaleWarningTimestamp
*time_Time CulledTimestamp
bool Stale
string DisplayName
int PackagesInstalled
int PackagesInstallable
*int ReporterID
bool ThirdParty
[]byte YumUpdates
int ApplicableAdvisoryCountCache
int ApplicableAdvisoryEnhCountCache
int ApplicableAdvisoryBugCountCache
int ApplicableAdvisorySecCountCache
bool SatelliteManaged
bool BuiltPkgcache
int PackagesApplicable
*int64 TemplateID
*string YumChecksum
*string Arch
bool Bootc
string GetInventoryID()
}
class SystemInventory {
int64 ID
string InventoryID
int RhAccountID
*string VmaasJSON
*string JSONChecksum
*time_Time LastUpdated
*time_Time UnchangedSince
*time_Time LastUpload
bool Stale
string DisplayName
*int ReporterID
[]byte YumUpdates
*string YumChecksum
bool SatelliteManaged
bool BuiltPkgcache
*string Arch
bool Bootc
[]byte Tags
time_Time Created
pq_StringArray Workspaces
*time_Time StaleTimestamp
*time_Time StaleWarningTimestamp
*time_Time CulledTimestamp
*string OSName
*int16 OSMajor
*int16 OSMinor
*string RhsmVersion
*string SubscriptionManagerID
bool SapWorkload
pq_StringArray SapWorkloadSIDs
bool AnsibleWorkload
*string AnsibleWorkloadControllerVersion
bool MssqlWorkload
*string MssqlWorkloadVersion
string TableName()
string GetInventoryID()
}
class SystemPatch {
int64 SystemID
int RhAccountID
*time_Time LastEvaluation
int InstallableAdvisoryCountCache
int InstallableAdvisoryEnhCountCache
int InstallableAdvisoryBugCountCache
int InstallableAdvisorySecCountCache
int PackagesInstalled
int PackagesInstallable
int PackagesApplicable
bool ThirdParty
int ApplicableAdvisoryCountCache
int ApplicableAdvisoryEnhCountCache
int ApplicableAdvisoryBugCountCache
int ApplicableAdvisorySecCountCache
*int64 TemplateID
string TableName()
}
SystemPlatform ..> SystemInventory : persisted_as
SystemPlatform ..> SystemPatch : persisted_as
SystemInventory "1" -- "1" SystemPatch : split_record
/retest
Codecov Report: ❌ Patch coverage is
Additional details and impacted files
@@ Coverage Diff @@
## master #1988 +/- ##
==========================================
+ Coverage 59.18% 59.32% +0.14%
==========================================
Files 133 134 +1
Lines 8599 8664 +65
==========================================
+ Hits 5089 5140 +51
- Misses 2967 2977 +10
- Partials 543 547 +4
Flags with carried forward coverage won't be shown. View full report in Codecov by Sentry.
Hey - I've found 6 issues, and left some high level feedback:
- In `142_split_system_platform.down.sql`, the `INSERT INTO system_platform SELECT` statement is syntactically and logically broken: there is a missing comma after `si.id`, and the join condition `si.inventory_id = sp.inventory_id` cannot work because `system_patch` does not have an `inventory_id` column (it should be joining on `sp.system_id`).
- The new `upsertSystemPatch` flow in `listener/upload.go` relies on `OnConflictUpdateMulti` with an empty `updateCols` slice to trigger a `DO NOTHING` upsert; consider making this behavior explicit (e.g. a helper or comment) so future readers do not misinterpret the "no columns" case as a bug and accidentally change the conflict behavior.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- In `142_split_system_platform.down.sql`, the `INSERT INTO system_platform SELECT` statement is syntactically and logically broken: there is a missing comma after `si.id`, and the join condition `si.inventory_id = sp.inventory_id` cannot work because `system_patch` does not have an `inventory_id` column (it should be joining on `sp.system_id`).
- The new `upsertSystemPatch` flow in `listener/upload.go` relies on `OnConflictUpdateMulti` with an empty `updateCols` slice to trigger a `DO NOTHING` upsert; consider making this behavior explicit (e.g. a helper or comment) so future readers do not misinterpret the "no columns" case as a bug and accidentally change the conflict behavior.
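To illustrate the second point, here is a sketch of the SQL that an empty update-column list effectively produces (table and column names as used in this PR; values are placeholders): on conflict, `DO NOTHING` affects zero rows and `RETURNING` yields no row, which is why `last_evaluation` then has to be loaded separately.
```sql
-- Degenerate upsert: with no update columns this is DO NOTHING, so an
-- existing row produces RowsAffected = 0 and no RETURNING output.
INSERT INTO system_patch (system_id, rh_account_id)
VALUES (42, 1)
ON CONFLICT (rh_account_id, system_id) DO NOTHING
RETURNING last_evaluation;

-- The existing row's last_evaluation must then be read back explicitly.
SELECT last_evaluation
FROM system_patch
WHERE system_id = 42 AND rh_account_id = 1;
```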
## Individual Comments
### Comment 1
<location> `database_admin/migrations/142_split_system_platform.down.sql:55-64` </location>
<code_context>
+INSERT INTO system_platform SELECT
</code_context>
<issue_to_address>
**issue (bug_risk):** Down migration has a malformed INSERT and joins on a non-existent column in system_patch.
In the `INSERT INTO system_platform SELECT` statement:
- `si.id` is missing a trailing comma before `si.inventory_id`, which will cause a syntax error.
- The join condition incorrectly uses `sp.inventory_id`, but `system_patch` only has `system_id` and `rh_account_id`. The join should likely be `ON si.rh_account_id = sp.rh_account_id AND si.id = sp.system_id`.
As written, the down migration will fail and cannot reconstruct `system_platform` correctly.
</issue_to_address>
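For illustration, a corrected shape of that statement could look like the sketch below; the full system_platform column list is abbreviated, so this is not the literal migration fix.
```sql
-- Comma restored after si.id, and the join keyed on sp.system_id,
-- since system_patch has no inventory_id column.
INSERT INTO system_platform
SELECT si.id,
       si.inventory_id,
       si.rh_account_id,
       -- ... remaining system_inventory columns ...
       sp.last_evaluation,
       sp.template_id
FROM system_inventory si
JOIN system_patch sp
  ON si.rh_account_id = sp.rh_account_id
 AND si.id = sp.system_id;
```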
### Comment 2
<location> `listener/upload_test.go:351-354` </location>
<code_context>
var nextval, currval int
database.DB.Model(&models.SystemPlatform{}).Select("count(*)").Find(&oldCount)
- database.DB.Raw("select nextval('system_platform_id_seq')").Find(&nextval)
+ database.DB.Raw("select nextval('system_inventory_id_seq')").Find(&nextval)
colsToUpdate := []string{"vmaas_json", "json_checksum", "reporter_id", "satellite_managed"}
</code_context>
<issue_to_address>
**suggestion (testing):** Add tests that explicitly cover the new `upsertSystemPatch` behavior and the split between `system_inventory` and `system_patch`.
The production behavior of `storeOrUpdateSysPlatform` has changed significantly: it now writes to `system_inventory` and then calls `upsertSystemPatch` with different paths depending on `system_patch` existence and `TemplateID`. `TestStoreOrUpdateSysPlatform` only checks sequence gaps/basic upsert and misses:
- Creating a new `system_patch` when none exists and `TemplateID` is non-nil
- Updating an existing `system_patch` when `TemplateID` changes
- Skipping writes (and reloading `last_evaluation`) when a `system_patch` exists and `TemplateID` is nil
- Correctly populating `system.LastEvaluation` from `system_patch`
Please add/extend tests to cover at least these cases:
1) New `SystemPlatform` with `TemplateID` set → `system_patch` row created with expected `template_id` and `last_evaluation`.
2) Existing `system_patch` → call with different `TemplateID` → row updated.
3) Existing `system_patch` → call with `TemplateID == nil` → `template_id` unchanged and `system.LastEvaluation` loaded from `system_patch`.
Suggested implementation:
```golang
var oldCount, newCount int
var nextval, currval int
database.DB.Model(&models.SystemPlatform{}).Select("count(*)").Find(&oldCount)
database.DB.Raw("select nextval('system_inventory_id_seq')").Find(&nextval)
colsToUpdate := []string{"vmaas_json", "json_checksum", "reporter_id", "satellite_managed"}
json := "this_is_json"
// make sure we are not creating gaps in id sequences
database.DB.Model(&models.SystemPlatform{}).Select("count(*)").Find(&newCount)
database.DB.Raw("select currval('system_inventory_id_seq')").Find(&currval)
countInc := newCount - oldCount
maxInc := currval - nextval
assert.Equal(t, countInc, maxInc)
// additional coverage for system_inventory / system_patch behavior
t.Run("new system with TemplateID creates system_patch", func(t *testing.T) {
tx := database.DB.Begin()
defer tx.Rollback()
now := time.Now().UTC()
templateID := uint(1001)
system := &models.SystemPlatform{
// fill required fields as per existing tests/conventions
TemplateID: &templateID,
LastEvaluation: &now,
}
if err := tx.Create(system).Error; err != nil {
t.Fatalf("failed to create system_inventory row: %v", err)
}
// exercise production behavior: write to system_inventory and upsert system_patch
if err := storeOrUpdateSysPlatform(tx, system, colsToUpdate, json); err != nil {
t.Fatalf("storeOrUpdateSysPlatform failed: %v", err)
}
var patch models.SystemPatch
if err := tx.Where("system_id = ?", system.ID).First(&patch).Error; err != nil {
t.Fatalf("expected system_patch row to be created: %v", err)
}
if patch.TemplateID == nil || *patch.TemplateID != templateID {
t.Fatalf("expected system_patch.template_id = %d; got %#v", templateID, patch.TemplateID)
}
if patch.LastEvaluation == nil || patch.LastEvaluation.IsZero() {
t.Fatalf("expected system_patch.last_evaluation to be populated")
}
})
t.Run("existing system_patch updated when TemplateID changes", func(t *testing.T) {
tx := database.DB.Begin()
defer tx.Rollback()
initialTemplateID := uint(2001)
updatedTemplateID := uint(2002)
now := time.Now().UTC()
system := &models.SystemPlatform{
TemplateID: &initialTemplateID,
LastEvaluation: &now,
}
if err := tx.Create(system).Error; err != nil {
t.Fatalf("failed to create system_inventory row: %v", err)
}
// seed existing system_patch for this system
initialPatch := &models.SystemPatch{
SystemID: system.ID,
TemplateID: &initialTemplateID,
LastEvaluation: &now,
}
if err := tx.Create(initialPatch).Error; err != nil {
t.Fatalf("failed to seed system_patch row: %v", err)
}
// change TemplateID and call storeOrUpdateSysPlatform
system.TemplateID = &updatedTemplateID
newEval := now.Add(5 * time.Minute)
system.LastEvaluation = &newEval
if err := storeOrUpdateSysPlatform(tx, system, colsToUpdate, json); err != nil {
t.Fatalf("storeOrUpdateSysPlatform failed: %v", err)
}
var patch models.SystemPatch
if err := tx.Where("system_id = ?", system.ID).First(&patch).Error; err != nil {
t.Fatalf("expected system_patch row to exist: %v", err)
}
if patch.TemplateID == nil || *patch.TemplateID != updatedTemplateID {
t.Fatalf("expected system_patch.template_id to be updated to %d; got %#v", updatedTemplateID, patch.TemplateID)
}
if patch.LastEvaluation == nil || !patch.LastEvaluation.Equal(newEval) {
t.Fatalf("expected system_patch.last_evaluation to be updated to %v; got %#v", newEval, patch.LastEvaluation)
}
})
t.Run("existing system_patch retained when TemplateID is nil and LastEvaluation reloaded", func(t *testing.T) {
tx := database.DB.Begin()
defer tx.Rollback()
templateID := uint(3001)
lastEval := time.Now().UTC().Add(-10 * time.Minute)
system := &models.SystemPlatform{
TemplateID: &templateID,
}
if err := tx.Create(system).Error; err != nil {
t.Fatalf("failed to create system_inventory row: %v", err)
}
patch := &models.SystemPatch{
SystemID: system.ID,
TemplateID: &templateID,
LastEvaluation: &lastEval,
}
if err := tx.Create(patch).Error; err != nil {
t.Fatalf("failed to seed system_patch row: %v", err)
}
// simulate call with TemplateID == nil; production code should:
// - not change template_id in system_patch
// - reload last_evaluation from system_patch into system
system.TemplateID = nil
system.LastEvaluation = nil
if err := storeOrUpdateSysPlatform(tx, system, colsToUpdate, json); err != nil {
t.Fatalf("storeOrUpdateSysPlatform failed: %v", err)
}
var patchAfter models.SystemPatch
if err := tx.Where("system_id = ?", system.ID).First(&patchAfter).Error; err != nil {
t.Fatalf("expected system_patch row to exist: %v", err)
}
if patchAfter.TemplateID == nil || *patchAfter.TemplateID != templateID {
t.Fatalf("expected system_patch.template_id to remain %d; got %#v", templateID, patchAfter.TemplateID)
}
if system.LastEvaluation == nil || !system.LastEvaluation.Equal(lastEval) {
t.Fatalf("expected system.LastEvaluation to be reloaded to %v; got %#v", lastEval, system.LastEvaluation)
}
})
```
These edits assume:
1. `listener/upload_test.go` already imports `time`.
If not, add `time` to the imports for this test file.
2. `models.SystemPlatform` has fields `ID`, `TemplateID *uint`, and `LastEvaluation *time.Time` (or compatible types), and GORM is configured so that this model maps to `system_inventory` / `system_platform` as appropriate.
3. `models.SystemPatch` exists with fields `SystemID`, `TemplateID *uint`, and `LastEvaluation *time.Time`, mapped to the `system_patch` table.
4. `storeOrUpdateSysPlatform` is the function under test and its signature matches `func storeOrUpdateSysPlatform(db *gorm.DB, system *models.SystemPlatform, colsToUpdate []string, json string) error` or similar.
If the actual signature differs (e.g., extra parameters like reporter, org ID, etc.), adjust the calls accordingly using the same arguments as in the existing tests.
If your existing tests use helper constructors/builders for `SystemPlatform` or `SystemPatch` (or require additional mandatory fields like `Account`, `OrgID`, `InventoryID`), you should replace the bare struct literals here with those helpers or populate the required fields to satisfy NOT NULL / unique constraints in your schema.
</issue_to_address>
### Comment 3
<location> `tasks/vmaas_sync/metrics_db_test.go:20-26` </location>
<code_context>
- assert.Equal(t, 214, len(tableSizes))
- assert.Equal(t, 214, len(uniqueTables))
- assert.True(t, uniqueTables["public.system_platform"]) // check whether table names were loaded
+ assert.Equal(t, 231, len(tableSizes))
+ assert.Equal(t, 231, len(uniqueTables))
+ assert.True(t, uniqueTables["public.system_inventory"]) // check whether table names were loaded
+ assert.True(t, uniqueTables["public.system_patch"]) // check whether table names were loaded
assert.True(t, uniqueTables["public.package"])
assert.True(t, uniqueTables["public.repo"])
assert.True(t, uniqueTables["inventory.hosts_v1_0"])
</code_context>
<issue_to_address>
**suggestion (testing):** Consider making the table count assertion less brittle while still verifying the new tables.
Hard-coding `231` couples this test to the exact schema size, so any unrelated table add/remove will require updating the test. Since you already assert the key tables (including the new ones), consider either:
- Keeping the explicit assertions for the important tables, and
- Relaxing the count check to `>=` some lower bound, or removing it if it isn’t providing extra value.
This keeps the test resilient to schema changes while still ensuring the new tables are covered by the metrics logic.
Suggested implementation:
```golang
for _, item := range tableSizes {
uniqueTables[item.Key] = true
}
// Keep a lower-bound to ensure we have a reasonably sized schema, but avoid
// coupling the test to the exact number of tables.
assert.GreaterOrEqual(t, len(tableSizes), 214)
assert.GreaterOrEqual(t, len(uniqueTables), 214)
// Explicitly assert existence of key tables, including newly added ones.
assert.True(t, uniqueTables["public.system_inventory"]) // check whether table names were loaded
assert.True(t, uniqueTables["public.system_patch"]) // check whether table names were loaded
assert.True(t, uniqueTables["public.package"])
assert.True(t, uniqueTables["public.repo"])
assert.True(t, uniqueTables["inventory.hosts_v1_0"])
```
1. Ensure that `github.com/stretchr/testify/assert` is already imported in this test file; if not, add it to the imports so that `assert.GreaterOrEqual` is available.
2. If your version of `testify` is too old to have `GreaterOrEqual`, replace those two lines with:
- `assert.True(t, len(tableSizes) >= 214)`
- `assert.True(t, len(uniqueTables) >= 214)`
</issue_to_address>
### Comment 4
<location> `turnpike/controllers/admin_test.go:21-25` </location>
<code_context>
core.SetupTest(t)
- assert.NoError(t, database.DB.Create(&models.SystemPlatform{
+ assert.NoError(t, database.DB.Create(&models.SystemInventory{
InventoryID: del,
RhAccountID: 1,
DisplayName: del,
+ Tags: []byte("[]"),
}).Error)
utils.TestLoadEnv("conf/manager.env")
</code_context>
<issue_to_address>
**suggestion (testing):** Add an integration-style test to ensure system deletion cleans up both `system_inventory` and `system_patch`.
This change updates the seed to `SystemInventory`, but the test still only covers initialization. With the new `system_inventory`/`system_patch` split, please add a test that creates linked `SystemInventory` and `SystemPatch` rows, runs `SystemDeleteHandler` for that `inventory_id`, and asserts that both records are removed (or otherwise handled as designed). This will verify the delete flow matches the new schema and doesn’t leave orphaned patch data.
</issue_to_address>
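At the SQL level, the end state such a test should assert can be sketched as follows; this assumes deletes against the compatibility view (or the delete handler) remove both halves, and the inventory_id value is a placeholder.
```sql
-- Delete through the compatibility view (or via SystemDeleteHandler).
DELETE FROM system_platform
WHERE inventory_id = '00000000-0000-0000-0000-000000000099';

-- Expect 0: no leftover host-metadata row.
SELECT count(*) FROM system_inventory
WHERE inventory_id = '00000000-0000-0000-0000-000000000099';

-- Expect 0: no orphaned patch rows without a matching inventory row.
SELECT count(*) FROM system_patch sp
WHERE NOT EXISTS (
    SELECT 1 FROM system_inventory si
    WHERE si.id = sp.system_id AND si.rh_account_id = sp.rh_account_id
);
```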
### Comment 5
<location> `listener/upload.go:479` </location>
<code_context>
- // update system
- err := tx.Select(colsToUpdate).Updates(system).Error
- return base.WrapFatalDBError(err, "unable to update system_platform")
+ inventoryRecord := models.SystemInventory{
+ ID: system.ID,
+ InventoryID: system.InventoryID,
</code_context>
<issue_to_address>
**issue (complexity):** Consider extracting the platform→inventory mapping, making the patch upsert branches explicit, and using a named templateID to simplify the new flow and avoid duplication.
You can simplify the new orchestration without changing behavior by:
1. **Extracting the `SystemPlatform` → `SystemInventory` mapping into a helper**
This removes the field‑by‑field duplication and keeps the structs in sync in one place:
```go
// in models or a small helper file close to the structs
func NewSystemInventoryFromPlatform(p *models.SystemPlatform) *models.SystemInventory {
return &models.SystemInventory{
ID: p.ID,
InventoryID: p.InventoryID,
RhAccountID: p.RhAccountID,
DisplayName: p.DisplayName,
LastUpload: p.LastUpload,
SatelliteManaged: p.SatelliteManaged,
BuiltPkgcache: p.BuiltPkgcache,
Arch: p.Arch,
Bootc: p.Bootc,
VmaasJSON: p.VmaasJSON,
JSONChecksum: p.JSONChecksum,
ReporterID: p.ReporterID,
YumUpdates: p.YumUpdates,
YumChecksum: p.YumChecksum,
StaleTimestamp: p.StaleTimestamp,
StaleWarningTimestamp: p.StaleWarningTimestamp,
CulledTimestamp: p.CulledTimestamp,
Tags: []byte("[]"),
}
}
```
Then `storeOrUpdateSysPlatform` becomes:
```go
func storeOrUpdateSysPlatform(tx *gorm.DB, system *models.SystemPlatform, colsToUpdate []string) error {
if err := tx.Where("rh_account_id = ? AND inventory_id = ?", system.RhAccountID, system.InventoryID).
Select("id").
Find(system).Error; err != nil {
utils.LogWarn("err", err, "couldn't find system for update")
}
txi := tx.Clauses(clause.Returning{
Columns: []clause.Column{
{Name: "id"}, {Name: "inventory_id"}, {Name: "rh_account_id"},
{Name: "unchanged_since"},
},
})
inventoryRecord := NewSystemInventoryFromPlatform(system)
if err := database.OnConflictUpdateMulti(txi, []string{"rh_account_id", "inventory_id"}, colsToUpdate...).
Create(inventoryRecord).Error; err != nil {
return base.WrapFatalDBError(err, "unable to insert to system_inventory")
}
system.ID = inventoryRecord.ID
system.InventoryID = inventoryRecord.InventoryID
system.RhAccountID = inventoryRecord.RhAccountID
system.UnchangedSince = inventoryRecord.UnchangedSince
return upsertSystemPatch(tx, system)
}
```
If you later add/remove fields, you only touch `NewSystemInventoryFromPlatform` instead of multiple call sites.
2. **Making `upsertSystemPatch` control flow explicit instead of relying on `RowsAffected` semantics**
The three cases are currently implicit. You can keep the same behavior but make it much easier to follow with explicit branches:
```go
func upsertSystemPatch(tx *gorm.DB, system *models.SystemPlatform) error {
tx = tx.Clauses(clause.Returning{Columns: []clause.Column{{Name: "last_evaluation"}}})
patchRecord := models.SystemPatch{
SystemID: system.ID,
RhAccountID: system.RhAccountID,
TemplateID: system.TemplateID,
}
// Case 1 & 2: create or update when we have a template
if system.TemplateID != nil {
res := database.OnConflictUpdateMulti(tx, []string{"rh_account_id", "system_id"}, "template_id").
Create(&patchRecord)
if res.Error != nil {
return base.WrapFatalDBError(res.Error, "unable to upsert system_patch with template")
}
system.LastEvaluation = patchRecord.LastEvaluation
return nil
}
// Case 3: no template; ensure row exists and load last_evaluation
var existing models.SystemPatch
err := tx.
Where("system_id = ? AND rh_account_id = ?", patchRecord.SystemID, patchRecord.RhAccountID).
First(&existing).Error
if err != nil {
// If the row truly doesn't exist, create it and let RETURNING fill last_evaluation
if errors.Is(err, gorm.ErrRecordNotFound) {
res := database.OnConflictUpdateMulti(tx, []string{"rh_account_id", "system_id"}).
Create(&patchRecord)
if res.Error != nil {
return base.WrapFatalDBError(res.Error, "unable to insert system_patch without template")
}
system.LastEvaluation = patchRecord.LastEvaluation
return nil
}
return base.WrapFatalDBError(err, "unable to load system_patch last_evaluation")
}
system.LastEvaluation = existing.LastEvaluation
return nil
}
```
This removes the need to reason about `RowsAffected == 0` implying “existing row and no update needed”, and makes each case self‑documenting.
3. **Re‑introducing a named `templateID` for clarity and to avoid repeated work**
Keeping the template lookup in a variable makes it obvious that it’s a separate concern and prevents accidental multiple calls later:
```go
templateID := hostTemplate(tx, accountID, host)
systemPlatform := models.SystemPlatform{
InventoryID: inventoryID,
RhAccountID: accountID,
DisplayName: displayName,
VmaasJSON: utils.EmptyToNil(&updatesReqJSONString),
JSONChecksum: utils.EmptyToNil(&jsonChecksum),
LastUpload: host.GetLastUpload(),
StaleTimestamp: host.StaleTimestamp.Time(),
StaleWarningTimestamp: host.StaleWarningTimestamp.Time(),
CulledTimestamp: host.CulledTimestamp.Time(),
ReporterID: getReporterID(host.Reporter),
YumUpdates: yumUpdates.GetRawParsed(),
YumChecksum: utils.EmptyToNil(&yumChecksum),
SatelliteManaged: host.SystemProfile.SatelliteManaged,
BuiltPkgcache: yumUpdates.GetBuiltPkgcache(),
Arch: host.SystemProfile.Arch,
Bootc: isBootc,
TemplateID: templateID,
}
```
These changes keep the new inventory/patch split and behavior but reduce duplication and make the control flow around patch upserts much easier to reason about.
</issue_to_address>
### Comment 6
<location> `base/models/models.go:101` </location>
<code_context>
return s.InventoryID
}
+type SystemInventory struct {
+ ID int64 `gorm:"primaryKey"`
+ InventoryID string `gorm:"unique"`
</code_context>
<issue_to_address>
**issue (complexity):** Consider extracting shared struct fields and adding small helper abstractions so these models are easier to understand and maintain without changing the database schema.
You can reduce the structural duplication and mental overhead without changing behavior or schema by introducing small, shared abstractions and helpers.
### 1. Extract shared fields between `SystemPlatform` and `SystemInventory`
Instead of duplicating the common “shape” in two structs, factor out the non–PK-related fields into a shared struct and embed it.
```go
// shared_fields.go
// SystemCommon holds fields shared by SystemPlatform and SystemInventory.
// No PK / uniqueness / table-specific tags here.
type SystemCommon struct {
VmaasJSON *string
JSONChecksum *string
LastUpdated *time.Time `gorm:"default:null"`
UnchangedSince *time.Time `gorm:"default:null"`
LastUpload *time.Time `gorm:"default:null"`
Stale bool
DisplayName string
ReporterID *int
YumUpdates []byte `gorm:"column:yum_updates"`
YumChecksum *string `gorm:"column:yum_checksum"`
SatelliteManaged bool `gorm:"column:satellite_managed"`
BuiltPkgcache bool `gorm:"column:built_pkgcache"`
Arch *string
Bootc bool
Tags []byte `gorm:"column:tags"`
Created time.Time
Workspaces pq.StringArray `gorm:"type:text[]"`
StaleTimestamp *time.Time
StaleWarningTimestamp *time.Time
CulledTimestamp *time.Time
OSName *string
OSMajor *int16
OSMinor *int16
RhsmVersion *string
SubscriptionManagerID *string
SapWorkload bool
SapWorkloadSIDs pq.StringArray `gorm:"type:text[];column:sap_workload_sids"`
AnsibleWorkload bool
AnsibleWorkloadControllerVersion *string
MssqlWorkload bool
MssqlWorkloadVersion *string
}
```
Then use it in both models:
```go
type SystemPlatform struct {
// existing PK / table-specific fields...
ID int64 `gorm:"primaryKey"`
InventoryID string `gorm:"unique"`
RhAccountID int `gorm:"primaryKey"`
SystemCommon
}
type SystemInventory struct {
ID int64 `gorm:"primaryKey"`
InventoryID string `gorm:"unique"`
RhAccountID int `gorm:"primaryKey"`
SystemCommon
}
```
You can centralize mapping logic for any copy operations (e.g. in `listener/upload.go`) so future changes to shared fields are localized:
```go
func NewSystemInventoryFromPlatform(p *SystemPlatform) *SystemInventory {
if p == nil {
return nil
}
return &SystemInventory{
ID: p.ID,
InventoryID: p.InventoryID,
RhAccountID: p.RhAccountID,
SystemCommon: p.SystemCommon,
}
}
```
This keeps the DB schema and behavior intact while removing duplication and risk of divergence.
---
### 2. Add structure / documentation to `SystemPatch` without changing schema
You can reduce the “flat bag of counters” feel by:
1. Adding doc comments that clarify logical grouping and relationship.
2. Exposing small helper types/methods to group related counters, without touching the fields (so no schema impact).
```go
// SystemPatch contains per-system patch/evaluation state.
// It is logically grouped into installable/applicable advisory counts
// and package counts, but stored flat for DB compatibility.
type SystemPatch struct {
SystemID int64 `gorm:"primaryKey"`
RhAccountID int `gorm:"primaryKey"`
// Evaluation metadata
LastEvaluation *time.Time `gorm:"default:null"` // set by DB trigger
// Installable advisory counts
InstallableAdvisoryCountCache int
InstallableAdvisoryEnhCountCache int
InstallableAdvisoryBugCountCache int
InstallableAdvisorySecCountCache int
// Package counts
PackagesInstalled int
PackagesInstallable int
PackagesApplicable int
ThirdParty bool
// Applicable advisory counts
ApplicableAdvisoryCountCache int
ApplicableAdvisoryEnhCountCache int
ApplicableAdvisoryBugCountCache int
ApplicableAdvisorySecCountCache int
TemplateID *int64 `gorm:"column:template_id"`
}
// Logical view over advisory counts to avoid callers dealing with all 8 fields.
type AdvisoryCounts struct {
Total int
Enh int
Bug int
Sec int
}
func (p SystemPatch) InstallableCounts() AdvisoryCounts {
return AdvisoryCounts{
Total: p.InstallableAdvisoryCountCache,
Enh: p.InstallableAdvisoryEnhCountCache,
Bug: p.InstallableAdvisoryBugCountCache,
Sec: p.InstallableAdvisorySecCountCache,
}
}
func (p SystemPatch) ApplicableCounts() AdvisoryCounts {
return AdvisoryCounts{
Total: p.ApplicableAdvisoryCountCache,
Enh: p.ApplicableAdvisoryEnhCountCache,
Bug: p.ApplicableAdvisoryBugCountCache,
Sec: p.ApplicableAdvisorySecCountCache,
}
}
```
Callers that previously read individual fields can now use `InstallableCounts()` / `ApplicableCounts()` for a clearer, grouped abstraction, while the raw fields stay available and unchanged for GORM and existing code paths.
</issue_to_address>
Update tables that referred to system_platform to use system_inventory instead.
A system_patch table with 16 partitions has been added.
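A minimal sketch of what a 16-way hash-partitioned system_patch can look like; the column list is abbreviated, and the partition key and naming are assumptions rather than the migration's actual DDL.
```sql
CREATE TABLE system_patch (
    system_id       bigint NOT NULL,
    rh_account_id   int    NOT NULL,
    last_evaluation timestamptz,
    template_id     bigint,
    PRIMARY KEY (rh_account_id, system_id)
) PARTITION BY HASH (rh_account_id);

-- One of the 16 partitions; the others repeat with REMAINDER 1..15.
CREATE TABLE system_patch_0 PARTITION OF system_patch
    FOR VALUES WITH (MODULUS 16, REMAINDER 0);
```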
Summary by Sourcery
Split the monolithic system_platform table into separate system_inventory and system_patch structures, keeping system_platform as a view backed by INSTEAD OF triggers and updating application code to use the new schema.