Iceberg CDC streaming source #37191
Conversation
Summary of Changes

Hello @ahmedabu98, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request delivers an Iceberg Change Data Capture (CDC) streaming source for Apache Beam. It allows Beam pipelines to consume a continuous stream of changes from an Iceberg table, including inserts, deletes, and updates. The implementation handles the different kinds of change events: straightforward additions and removals are processed directly, while complex update scenarios go through a reconciliation step. This significantly expands Beam's capabilities for real-time data integration with Iceberg.

Highlights
/gemini review
Code Review
This pull request introduces a significant new feature: an Iceberg CDC (Change Data Capture) streaming source. The overall design is solid, leveraging Beam's splittable DoFns and CoGroupByKey for reconciling changes. The code is well-structured into different components for scanning, reading, and reconciling changelogs.
I've identified a few areas for improvement, mainly around cleaning up TODO comments and debugging statements. My specific comments are below.
There is one critical issue I couldn't comment on directly due to tooling limitations: in SerializableDataFile.java, the equals and hashCode methods have not been updated to include the newly added fields (dataSequenceNumber, fileSequenceNumber, firstRowId). This violates the Object contract and can lead to subtle bugs. Please ensure this is fixed.
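A minimal sketch of the fix described above, assuming the new fields are nullable Long values (the rest of SerializableDataFile is elided; field names come from the review comment):

```java
import java.util.Objects;

// Illustrative sketch only: the real SerializableDataFile has many more
// fields. The point is that the newly added fields must appear in BOTH
// equals() and hashCode() to keep the Object contract intact.
class SerializableDataFile {
  private final String path;
  private final Long dataSequenceNumber; // newly added
  private final Long fileSequenceNumber; // newly added
  private final Long firstRowId;         // newly added

  SerializableDataFile(String path, Long dataSeq, Long fileSeq, Long firstRowId) {
    this.path = path;
    this.dataSequenceNumber = dataSeq;
    this.fileSequenceNumber = fileSeq;
    this.firstRowId = firstRowId;
  }

  @Override
  public boolean equals(Object o) {
    if (this == o) return true;
    if (!(o instanceof SerializableDataFile)) return false;
    SerializableDataFile that = (SerializableDataFile) o;
    return Objects.equals(path, that.path)
        && Objects.equals(dataSequenceNumber, that.dataSequenceNumber)
        && Objects.equals(fileSequenceNumber, that.fileSequenceNumber)
        && Objects.equals(firstRowId, that.firstRowId);
  }

  @Override
  public int hashCode() {
    return Objects.hash(path, dataSequenceNumber, fileSequenceNumber, firstRowId);
  }
}
```

Omitting the new fields from either method would let two logically different files compare equal (or hash identically), which breaks deduplication and map lookups in subtle, hard-to-debug ways.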
```java
// TODO: remove this (unused)
@SuppressWarnings("method.invocation")
public BeamDeleteFilter(
    FileIO io,
    SerializableChangelogTask scanTask,
    Schema tableSchema,
    Schema projectedSchema,
    List<DeleteFile> deleteFiles) {
  super(scanTask.getDataFile().getPath(), deleteFiles, tableSchema, projectedSchema);
  this.io = io;
  this.asStructLike = new InternalRecordWrapper(requiredSchema().asStruct());
}

// TODO: remove this (unused)
@SuppressWarnings("method.invocation")
public BeamDeleteFilter(FileIO io, ContentScanTask<?> scanTask, List<DeleteFile> deleteFiles) {
  super(
      scanTask.file().location(),
      deleteFiles,
      scanTask.spec().schema(),
      scanTask.spec().schema());
  this.io = io;
  this.asStructLike = new InternalRecordWrapper(requiredSchema().asStruct());
}
```
```java
// TODO: remove this (unused)
@SuppressWarnings("method.invocation")
public BeamDeleteReader(
    FileIO io,
    SerializableChangelogTask scanTask,
    Schema tableSchema,
    Schema projectedSchema,
    List<DeleteFile> deleteFiles) {
  super(scanTask.getDataFile().getPath(), deleteFiles, tableSchema, projectedSchema);
  this.io = io;
  this.asStructLike = new InternalRecordWrapper(requiredSchema().asStruct());
}

// TODO: remove this (unused)
@SuppressWarnings("method.invocation")
public BeamDeleteReader(FileIO io, ContentScanTask<?> scanTask, List<DeleteFile> deleteFiles) {
  super(
      scanTask.file().location(),
      deleteFiles,
      scanTask.spec().schema(),
      scanTask.spec().schema());
  this.io = io;
  this.asStructLike = new InternalRecordWrapper(requiredSchema().asStruct());
}
```
```java
// TODO: use TableCache here
Table table = scanConfig.getTable();
```
The TODO comment suggests using TableCache. This is a good practice to avoid repeatedly loading table metadata, which can be expensive. Please use TableCache.get(element.getKey()) here. You will also need to add a @Setup method to initialize the TableCache with TableCache.setup(scanConfig);.
```diff
- // TODO: use TableCache here
- Table table = scanConfig.getTable();
+ Table table = TableCache.get(element.getKey());
```
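The benefit of the suggested change can be shown with a toy model of the caching pattern (this is a stand-in class, not Beam's actual TableCache): metadata is loaded at most once per table identifier and reused for every subsequent element.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;

// Toy model of the review's suggestion (NOT Beam's real TableCache):
// the expensive loader runs once per table identifier; later lookups
// for the same identifier hit the in-memory cache.
class TableCacheModel {
  private final Map<String, String> cache = new ConcurrentHashMap<>();
  private final Function<String, String> loader;
  private final AtomicInteger loads = new AtomicInteger();

  TableCacheModel(Function<String, String> loader) {
    this.loader = loader;
  }

  String get(String tableId) {
    return cache.computeIfAbsent(tableId, id -> {
      loads.incrementAndGet(); // counts expensive metadata loads
      return loader.apply(id);
    });
  }

  int loadCount() {
    return loads.get();
  }
}
```

Calling scanConfig.getTable() per element would instead pay the metadata-load cost on every call, which is exactly what the reviewer is flagging.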
```java
// TODO: use TableCache
Table table = scanConfig.getTable();
```
The TODO comment suggests using TableCache. This is a good practice to avoid repeatedly loading table metadata. Please use TableCache.get(element.getKey().getTableIdentifierString()) here. You will also need to add TableCache.setup(scanConfig); to the @Setup method.
```diff
- // TODO: use TableCache
- Table table = scanConfig.getTable();
+ Table table = TableCache.get(element.getKey().getTableIdentifierString());
```
```java
Row id = structToBeamRow(ordinal, recId, recordIdSchema, rowIdWithOrdinalBeamSchema);
outputReceiver.get(keyedTag).outputWithTimestamp(KV.of(id, row), timestamp);
} else { // fast path
  System.out.printf("[UNIFORM] -- Output(%s, %s)\n%s%n", ordinal, timestamp, row);
```
```java
if (hasInserts && hasDeletes) {
  // UPDATE: row ID exists in both streams
  // - emit all deletes as 'UPDATE_BEFORE', and all inserts as 'UPDATE_AFTER'
  // - emit extra inserts as 'UPDATE_AFTER'
  // - ignore extra deletes (TODO: double check if this is a good decision)
  Iterator<TimestampedValue<Row>> deletesIterator = deletes.iterator();
  Iterator<TimestampedValue<Row>> insertsIterator = inserts.iterator();
  while (deletesIterator.hasNext() && insertsIterator.hasNext()) {
    // TODO: output as UPDATE_BEFORE kind
    TimestampedValue<Row> updateBefore = deletesIterator.next();
    out.outputWithTimestamp(updateBefore.getValue(), updateBefore.getTimestamp());
    System.out.printf("[MIXED] -- UpdateBefore\n%s\n", updateBefore);

    // TODO: output as UPDATE_AFTER kind
    TimestampedValue<Row> updateAfter = insertsIterator.next();
    out.outputWithTimestamp(updateAfter.getValue(), updateAfter.getTimestamp());
    System.out.printf("[MIXED] -- UpdateAfter\n%s\n", updateAfter);
  }
  while (insertsIterator.hasNext()) {
    // TODO: output as UPDATE_AFTER kind
    TimestampedValue<Row> insert = insertsIterator.next();
    out.outputWithTimestamp(insert.getValue(), insert.getTimestamp());
    System.out.printf("[MIXED] -- Added(extra)\n%s\n", insert);
  }
} else if (hasInserts) {
  // INSERT only
  for (TimestampedValue<Row> rec : inserts) {
    System.out.printf("[MIXED] -- Added\n%s\n", rec);
    out.outputWithTimestamp(rec.getValue(), rec.getTimestamp());
  }
} else if (hasDeletes) {
  // DELETE only
  for (TimestampedValue<Row> rec : deletes) {
    // TODO: output as DELETE kind
    System.out.printf("[MIXED] -- Deleted\n%s\n", rec);
    out.outputWithTimestamp(rec.getValue(), rec.getTimestamp());
  }
}
```
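The branching in this reconciliation step can be modeled as a small pure function. The sketch below uses plain strings in place of Beam rows and mirrors the current behavior, including the ignore-extra-deletes choice the TODO questions:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Toy model of the reconciliation logic above: row IDs present in both
// streams are paired positionally as UPDATE_BEFORE / UPDATE_AFTER;
// leftover inserts become extra UPDATE_AFTERs; leftover deletes are
// dropped, matching the TODO's current (questioned) behavior.
class ChangelogReconciler {
  static List<String> reconcile(List<String> deletes, List<String> inserts) {
    List<String> out = new ArrayList<>();
    Iterator<String> d = deletes.iterator();
    Iterator<String> i = inserts.iterator();
    if (d.hasNext() && i.hasNext()) {
      while (d.hasNext() && i.hasNext()) {
        out.add("UPDATE_BEFORE:" + d.next());
        out.add("UPDATE_AFTER:" + i.next());
      }
      while (i.hasNext()) {
        out.add("UPDATE_AFTER:" + i.next());
      }
      // extra deletes intentionally ignored, per the TODO
    } else if (i.hasNext()) {
      while (i.hasNext()) {
        out.add("INSERT:" + i.next());
      }
    } else {
      while (d.hasNext()) {
        out.add("DELETE:" + d.next());
      }
    }
    return out;
  }
}
```

Note that positional pairing assumes the per-key delete and insert streams arrive in a meaningful order; if they do not, the UPDATE_BEFORE/UPDATE_AFTER pairs may not correspond to the same logical update, which is worth covering in tests.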