
Conversation

@ahmedabu98
Contributor

No description provided.

@gemini-code-assist
Contributor

Summary of Changes

Hello @ahmedabu98, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request delivers a robust Iceberg Change Data Capture (CDC) streaming source for Apache Beam. It allows Beam pipelines to consume a continuous stream of changes from an Iceberg table, including inserts, deletes, and updates. The implementation intelligently handles different types of change events, processing straightforward additions or removals directly, while employing a sophisticated reconciliation step for complex update scenarios. This significantly expands Beam's capabilities for real-time data integration with Iceberg.

Highlights

  • New CDC Streaming Source: Introduced a comprehensive Change Data Capture (CDC) streaming source for Iceberg tables, enabling continuous processing of data modifications (inserts, deletes, updates) within Apache Beam pipelines.
  • Changelog Task Processing: Implemented a mechanism to scan incremental Iceberg snapshots and categorize changelog tasks into 'uniform' (all inserts or all deletes) and 'mixed' (both inserts and deletes for the same record) for optimized processing.
  • Data Reconciliation for Updates: Developed a reconciliation process for 'mixed' changelog tasks, using co-grouping by record ID to accurately identify and emit 'UPDATE_BEFORE' and 'UPDATE_AFTER' records, alongside pure inserts and deletes.
  • Enhanced Read Utilities: Extended ReadUtils with new methods and classes (DeleteReader, BeamDeleteFilter, BeamDeleteReader) to support reading and filtering based on Iceberg delete files, crucial for CDC functionality.
  • Serialization of Iceberg Metadata: Added new serializable classes (SerializableDeleteFile, SerializableChangelogTask, ChangelogDescriptor) to efficiently transfer Iceberg metadata and task information across Beam workers.
  • Increased Component Visibility: Adjusted the visibility of key classes and methods in IncrementalScanSource and SerializableDataFile to public or protected, facilitating their reuse and extension within the new CDC framework.
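The reconciliation described in the highlights above can be sketched independently of the Beam plumbing. Everything below (`Kind`, `Change`, `reconcile`, string-valued rows) is an illustrative stand-in rather than the PR's actual types: deletes and inserts that share a record ID are paired into UPDATE_BEFORE/UPDATE_AFTER, surplus inserts are emitted as UPDATE_AFTER, and surplus deletes are dropped, mirroring the logic in the diff further down.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class Reconcile {
  enum Kind { INSERT, DELETE, UPDATE_BEFORE, UPDATE_AFTER }

  record Change(Kind kind, String row) {}

  // Reconcile all changes for a single record ID, mirroring the PR's
  // mixed-task logic: matched delete/insert pairs become
  // UPDATE_BEFORE/UPDATE_AFTER, surplus inserts become UPDATE_AFTER,
  // and surplus deletes are intentionally ignored (see the TODO in the PR).
  static List<Change> reconcile(List<String> deletes, List<String> inserts) {
    List<Change> out = new ArrayList<>();
    if (!deletes.isEmpty() && !inserts.isEmpty()) {
      Iterator<String> d = deletes.iterator();
      Iterator<String> i = inserts.iterator();
      while (d.hasNext() && i.hasNext()) {
        out.add(new Change(Kind.UPDATE_BEFORE, d.next()));
        out.add(new Change(Kind.UPDATE_AFTER, i.next()));
      }
      while (i.hasNext()) {
        out.add(new Change(Kind.UPDATE_AFTER, i.next()));
      }
    } else if (!inserts.isEmpty()) {
      // pure inserts for this record ID
      for (String r : inserts) out.add(new Change(Kind.INSERT, r));
    } else {
      // pure deletes for this record ID
      for (String r : deletes) out.add(new Change(Kind.DELETE, r));
    }
    return out;
  }
}
```

In the actual source the grouping by record ID is done with CoGroupByKey, so `reconcile` corresponds to the per-key processing step.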



@ahmedabu98
Contributor Author

/gemini review


@gemini-code-assist (bot) left a comment


Code Review

This pull request introduces a significant new feature: an Iceberg CDC (Change Data Capture) streaming source. The overall design is solid, leveraging Beam's splittable DoFns and CoGroupByKey for reconciling changes. The code is well-structured into different components for scanning, reading, and reconciling changelogs.

I've identified a few areas for improvement, mainly around cleaning up TODO comments and debugging statements. My specific comments are below.

There is one critical issue I couldn't comment on directly due to tooling limitations: in SerializableDataFile.java, the equals and hashCode methods have not been updated to include the newly added fields (dataSequenceNumber, fileSequenceNumber, firstRowId). This violates the Object contract and can lead to subtle bugs. Please ensure this is fixed.
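A hedged sketch of that fix: every field compared in `equals` must also feed `hashCode`, or equal objects can land in different hash buckets. Only the three field names `dataSequenceNumber`, `fileSequenceNumber`, and `firstRowId` come from the review comment; the surrounding class and the `path` field are hypothetical stand-ins for `SerializableDataFile`.

```java
import java.util.Objects;

// Illustrative stand-in for SerializableDataFile: only the three newly
// added field names are taken from the review; everything else is assumed.
public class DataFileKey {
  private final String path;                 // pre-existing field (example)
  private final Long dataSequenceNumber;     // newly added field
  private final Long fileSequenceNumber;     // newly added field
  private final Long firstRowId;             // newly added field

  public DataFileKey(String path, Long dataSeq, Long fileSeq, Long firstRowId) {
    this.path = path;
    this.dataSequenceNumber = dataSeq;
    this.fileSequenceNumber = fileSeq;
    this.firstRowId = firstRowId;
  }

  @Override
  public boolean equals(Object o) {
    if (this == o) return true;
    if (!(o instanceof DataFileKey)) return false;
    DataFileKey that = (DataFileKey) o;
    // Every field compared here must also appear in hashCode below.
    return Objects.equals(path, that.path)
        && Objects.equals(dataSequenceNumber, that.dataSequenceNumber)
        && Objects.equals(fileSequenceNumber, that.fileSequenceNumber)
        && Objects.equals(firstRowId, that.firstRowId);
  }

  @Override
  public int hashCode() {
    return Objects.hash(path, dataSequenceNumber, fileSequenceNumber, firstRowId);
  }
}
```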

Comment on lines +293 to +316
// TODO: remove this (unused)
@SuppressWarnings("method.invocation")
public BeamDeleteFilter(
    FileIO io,
    SerializableChangelogTask scanTask,
    Schema tableSchema,
    Schema projectedSchema,
    List<DeleteFile> deleteFiles) {
  super(scanTask.getDataFile().getPath(), deleteFiles, tableSchema, projectedSchema);
  this.io = io;
  this.asStructLike = new InternalRecordWrapper(requiredSchema().asStruct());
}

// TODO: remove this (unused)
@SuppressWarnings("method.invocation")
public BeamDeleteFilter(FileIO io, ContentScanTask<?> scanTask, List<DeleteFile> deleteFiles) {
  super(
      scanTask.file().location(),
      deleteFiles,
      scanTask.spec().schema(),
      scanTask.spec().schema());
  this.io = io;
  this.asStructLike = new InternalRecordWrapper(requiredSchema().asStruct());
}

medium

These constructors are marked as unused with a TODO to remove them. To improve code clarity and maintainability, they should be removed.

Comment on lines +345 to +368
// TODO: remove this (unused)
@SuppressWarnings("method.invocation")
public BeamDeleteReader(
    FileIO io,
    SerializableChangelogTask scanTask,
    Schema tableSchema,
    Schema projectedSchema,
    List<DeleteFile> deleteFiles) {
  super(scanTask.getDataFile().getPath(), deleteFiles, tableSchema, projectedSchema);
  this.io = io;
  this.asStructLike = new InternalRecordWrapper(requiredSchema().asStruct());
}

// TODO: remove this (unused)
@SuppressWarnings("method.invocation")
public BeamDeleteReader(FileIO io, ContentScanTask<?> scanTask, List<DeleteFile> deleteFiles) {
  super(
      scanTask.file().location(),
      deleteFiles,
      scanTask.spec().schema(),
      scanTask.spec().schema());
  this.io = io;
  this.asStructLike = new InternalRecordWrapper(requiredSchema().asStruct());
}

medium

These constructors are marked as unused with a TODO to remove them. To improve code clarity and maintainability, they should be removed.

Comment on lines +78 to +79
// TODO: use TableCache here
Table table = scanConfig.getTable();

medium

The TODO comment suggests using TableCache. This is a good practice to avoid repeatedly loading table metadata, which can be expensive. Please use TableCache.get(element.getKey()) here. You will also need to add a @Setup method to initialize the TableCache with TableCache.setup(scanConfig);.

Suggested change
-// TODO: use TableCache here
-Table table = scanConfig.getTable();
+Table table = TableCache.get(element.getKey());
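The motivation behind the suggestion is the usual per-worker memoization pattern: load table metadata at most once per key, then reuse it. `TableCache`'s real API is whatever the PR defines; the class below is a minimal, dependency-free stand-in that illustrates the pattern, with a load counter added purely so the saving is visible.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;

// Minimal stand-in for a per-worker table cache: the expensive loader runs
// at most once per table identifier, and concurrent lookups reuse the value.
public class SimpleTableCache<T> {
  private final ConcurrentHashMap<String, T> cache = new ConcurrentHashMap<>();
  private final Function<String, T> loader;
  final AtomicInteger loads = new AtomicInteger(); // for demonstration only

  public SimpleTableCache(Function<String, T> loader) {
    this.loader = loader;
  }

  public T get(String tableIdentifier) {
    return cache.computeIfAbsent(tableIdentifier, id -> {
      loads.incrementAndGet(); // the expensive catalog/metadata load happens here
      return loader.apply(id);
    });
  }
}
```

In a DoFn this would typically be initialized once in a `@Setup` method, as the review comment suggests, so repeated bundles on the same worker skip the catalog round trip.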

Comment on lines +133 to +134
// TODO: use TableCache
Table table = scanConfig.getTable();

medium

The TODO comment suggests using TableCache. This is a good practice to avoid repeatedly loading table metadata. Please use TableCache.get(element.getKey().getTableIdentifierString()) here. You will also need to add TableCache.setup(scanConfig); to the @Setup method.

Suggested change
-// TODO: use TableCache
-Table table = scanConfig.getTable();
+Table table = TableCache.get(element.getKey().getTableIdentifierString());

  Row id = structToBeamRow(ordinal, recId, recordIdSchema, rowIdWithOrdinalBeamSchema);
  outputReceiver.get(keyedTag).outputWithTimestamp(KV.of(id, row), timestamp);
} else { // fast path
  System.out.printf("[UNIFORM] -- Output(%s, %s)\n%s%n", ordinal, timestamp, row);

medium

This System.out.printf seems to be for debugging and should be removed before merging.
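If any of this output is still useful during development, the conventional replacement is a debug-level logger rather than stdout. Beam code normally uses SLF4J; `java.util.logging` is used below only to keep the sketch dependency-free, and the class and method names are illustrative.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class DebugLogDemo {
  private static final Logger LOG = Logger.getLogger(DebugLogDemo.class.getName());

  // Instead of System.out.printf(...), emit at a debug level that is
  // silenced by default and cheap to leave in place.
  static void emit(String ordinal, String row) {
    if (LOG.isLoggable(Level.FINE)) { // skip formatting cost when disabled
      LOG.fine(String.format("[UNIFORM] -- Output(%s)%n%s", ordinal, row));
    }
  }
}
```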

Comment on lines +47 to +84
if (hasInserts && hasDeletes) {
  // UPDATE: row ID exists in both streams
  // - emit all deletes as 'UPDATE_BEFORE', and all inserts as 'UPDATE_AFTER'
  // - emit extra inserts as 'UPDATE_AFTER'
  // - ignore extra deletes (TODO: double check if this is a good decision)
  Iterator<TimestampedValue<Row>> deletesIterator = deletes.iterator();
  Iterator<TimestampedValue<Row>> insertsIterator = inserts.iterator();
  while (deletesIterator.hasNext() && insertsIterator.hasNext()) {
    // TODO: output as UPDATE_BEFORE kind
    TimestampedValue<Row> updateBefore = deletesIterator.next();
    out.outputWithTimestamp(updateBefore.getValue(), updateBefore.getTimestamp());
    System.out.printf("[MIXED] -- UpdateBefore\n%s\n", updateBefore);

    // TODO: output as UPDATE_AFTER kind
    TimestampedValue<Row> updateAfter = insertsIterator.next();
    out.outputWithTimestamp(updateAfter.getValue(), updateAfter.getTimestamp());
    System.out.printf("[MIXED] -- UpdateAfter\n%s\n", updateAfter);
  }
  while (insertsIterator.hasNext()) {
    // TODO: output as UPDATE_AFTER kind
    TimestampedValue<Row> insert = insertsIterator.next();
    out.outputWithTimestamp(insert.getValue(), insert.getTimestamp());
    System.out.printf("[MIXED] -- Added(extra)\n%s\n", insert);
  }
} else if (hasInserts) {
  // INSERT only
  for (TimestampedValue<Row> rec : inserts) {
    System.out.printf("[MIXED] -- Added\n%s\n", rec);
    out.outputWithTimestamp(rec.getValue(), rec.getTimestamp());
  }
} else if (hasDeletes) {
  // DELETE only
  for (TimestampedValue<Row> rec : deletes) {
    // TODO: output as DELETE kind
    System.out.printf("[MIXED] -- Deleted\n%s\n", rec);
    out.outputWithTimestamp(rec.getValue(), rec.getTimestamp());
  }
}

medium

This method contains several System.out.printf statements that appear to be for debugging. They should be removed before merging.
