Commit aa87b9e
Author: Rolef Heinrich

Merge pull request #110 from IBMStreams/develop

Release 3.1.0

2 parents bcc1546 + 8d78346, commit aa87b9e

File tree

3 files changed: +113 −194 lines changed
Lines changed: 109 additions & 0 deletions
@@ -0,0 +1,109 @@
# Changes

## v3.1.0:
* [IBMStreams/streamsx.messagehub/#109](https://github.com/IBMStreams/streamsx.messagehub/issues/109) no support for static consumer group membership

## v3.0.4:
* [IBMStreams/streamsx.kafka/#203](https://github.com/IBMStreams/streamsx.kafka/issues/203) KafkaConsumer: assign output attributes via index rather than attribute name
* [#105](https://github.com/IBMStreams/streamsx.messagehub/issues/105) Make main composites of samples public.
  This allows using the samples with the _streamsx_ Python package.
* [IBMStreams/streamsx.kafka/#208](https://github.com/IBMStreams/streamsx.kafka/issues/208) KafkaProducer: message or key attribute with underline causes error at context checker.
  All previous versions back to 1.0.0 are affected by this issue.
* New sample: [KafkaAvroSample](https://github.com/IBMStreams/streamsx.kafka/tree/develop/samples/KafkaAvroSample)

## v3.0.3:
* [IBMStreams/streamsx.kafka/#198](https://github.com/IBMStreams/streamsx.kafka/issues/198) - The "nConsecutiveRuntimeExc" variable never reaches 50 when exceptions occur

## v3.0.2
* [IBMStreams/streamsx.kafka/#200](https://github.com/IBMStreams/streamsx.kafka/issues/200) - I18n update

## v3.0.1
* [IBMStreams/streamsx.kafka/#196](https://github.com/IBMStreams/streamsx.kafka/issues/196) - KafkaProducer: Consistent region reset can trigger additional reset

## v3.0.0
### Changes and enhancements
* The included Kafka client has been upgraded from version 2.2.1 to 2.3.1.
* The schema of the output port of the `MessageHubProducer` operator supports optional types for the error description.
* The optional input port of the `MessageHubConsumer` operator can be used to change the *topic subscription*, not only the *partition assignment*.
* The **guaranteeOrdering** parameter now enables the idempotent producer when set to `true`, which allows higher throughput by permitting more
  in-flight requests per connection (requires Kafka server version 0.11 or higher).
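As a sketch, the new behavior can be enabled on a producer invocation like the following (the stream name, topic, and surrounding composite are placeholders, and use directives are omitted):

```
// Illustrative sink invocation; "Messages" and the topic name are placeholders.
() as MessageSink = MessageHubProducer (Messages) {
    param
        topic: "myTopic";
        // enables the idempotent producer and thereby more in-flight
        // requests per connection (requires broker 0.11 or higher)
        guaranteeOrdering: true;
}
```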
* The `MessageHubConsumer` operator now enables and benefits from group management when the user does not specify a group identifier.
* Checkpoint reset of the `MessageHubConsumer` is optimized in a consistent region when the consumer is the only group member.
* The `MessageHubConsumer` operator now uses `read_committed` as the default `isolation.level` configuration unless the user has specified a different value.
  In `read_committed` mode, the consumer reads only those transactional messages which have been successfully committed.
  Messages of aborted transactions are now skipped. The consumer continues to read non-transactional messages as before.
  This new default setting is incompatible with Kafka 0.10.2.
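Applications that must keep the old behavior (for example against a Kafka 0.10.2 broker) can override the new default with a consumer property. A minimal sketch of a properties file that could be referenced via the operator's **propertiesFile** parameter (the filename is illustrative):

```properties
# etc/consumer.properties (illustrative name)
# revert to the pre-3.0 default isolation level
isolation.level=read_uncommitted
```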
### Deprecated features
The use of the input control port is deprecated when the `MessageHubConsumer` is used in a consistent region.

### Removed features
Functionality that was deprecated in toolkit version 2.x has been removed in this toolkit version. The removed items are:
* The **messageHubCredentialsFile** operator parameter has been removed from all operators. Please use the **credentialsFile** parameter instead.
* The default filename `etc/messagehub.json` for specifying Event Streams service credentials is no longer read. Please use `etc/eventstreams.json` when you want to use a default filename for service credentials.
* The default name `messagehub` for an application configuration is no longer read. Please use `eventstreams` as the name for the application configuration when you want to use a default.
* The property name `messagehub.creds` within an application configuration is no longer read. Please name the property for the Event Streams credentials `eventstreams.creds`.

### Incompatible changes
* The toolkit requires at minimum Streams version 4.3.
* When the `MessageHubConsumer` operator is configured with an input port, the **topic**, **pattern**, **partition**, and **startPosition**
  parameters used to be ignored in previous versions. Now an SPL compiler failure is raised when one of these parameters is used
  together with the input port.
* The deprecated functionality listed above has been removed.

## v2.2.1
* [IBMStreams/streamsx.kafka/#179](https://github.com/IBMStreams/streamsx.kafka/issues/179) - KafkaProducer: Lost output tuples on FinalMarker reception

## v2.2.0
* The `MessageHubProducer` operator supports an optional output port, configurable via the new **outputErrorsOnly** operator parameter.
* Exception handling of the `MessageHubProducer` operator in an autonomous region has changed. The operator no longer aborts its PE; it recovers internally instead.
* New custom metrics for the `MessageHubProducer` operator: `nFailedTuples`, `nPendingTuples`, and `nQueueFullPause`

## v2.1.0
### Changes and enhancements
* This toolkit version has also been tested with Kafka 2.3.
* [IBMStreams/streamsx.kafka/#169](https://github.com/IBMStreams/streamsx.kafka/issues/169) new optional operator parameter **sslDebug**. For debugging SSL issues see also the [Kafka toolkit documentation](https://ibmstreams.github.io/streamsx.kafka/docs/user/debugging_ssl_issues/)
* [IBMStreams/streamsx.kafka/#167](https://github.com/IBMStreams/streamsx.kafka/issues/167) changed default values for the following consumer and producer configurations:

  - `client.dns.lookup = use_all_dns_ips`
  - `reconnect.backoff.max.ms = 10000` (Kafka's default is 1000)
  - `reconnect.backoff.ms = 250` (Kafka's default is 50)
  - `retry.backoff.ms = 500` (Kafka's default is 100)
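Applications that prefer Kafka's built-in values can set them explicitly; an illustrative properties-file fragment restoring the defaults listed above:

```properties
# restore Kafka's own defaults, overriding the toolkit's new values
reconnect.backoff.max.ms=1000
reconnect.backoff.ms=50
retry.backoff.ms=100
```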
* Changed exception handling for the MessageHubProducer when not used in a consistent region: https://github.com/IBMStreams/streamsx.kafka/issues/163#issuecomment-505402607

### Bug fixes
* [IBMStreams/streamsx.kafka/#163](https://github.com/IBMStreams/streamsx.kafka/issues/163) MessageHubProducer's exception handling makes the operator lose tuples when in a CR
* [IBMStreams/streamsx.kafka/#164](https://github.com/IBMStreams/streamsx.kafka/issues/164) on reset() the MessageHubProducerOperator should instantiate a new producer instance
* [IBMStreams/streamsx.kafka/#166](https://github.com/IBMStreams/streamsx.kafka/issues/166) Resource leak in MessageHubProducer when reset to initial state in a CR

## v2.0.2
* [IBMStreams/streamsx.kafka/#171](https://github.com/IBMStreams/streamsx.kafka/issues/171) Resetting from checkpoint will fail when sequence id is >1000

## v2.0.1
* [#91](https://github.com/IBMStreams/streamsx.messagehub/issues/91) - Operators fail to parse credentials

## v2.0.0
### Changes and enhancements

* The included Kafka client has been upgraded from version 2.1.1 to 2.2.1, [#98](https://github.com/IBMStreams/streamsx.messagehub/issues/98)
* Support for Kafka broker 2.2 has been added, [IBMStreams/streamsx.kafka/#161](https://github.com/IBMStreams/streamsx.kafka/issues/161)
* The toolkit has enhancements for the **MessageHubConsumer** when it is used in an autonomous region (i.e. not part of a consistent region):
  - The MessageHubConsumer operator can now participate in a consumer group with **startPosition** parameter values `Beginning`, `End`, and `Time`, [IBMStreams/streamsx.kafka/#94](https://github.com/IBMStreams/streamsx.kafka/issues/94)
  - After re-launch of the PE, the MessageHubConsumer operator no longer overrides the initial fetch offset with the **startPosition** parameter value, i.e. after a PE re-launch the consumer resumes at the last committed offset, [IBMStreams/streamsx.kafka/#107](https://github.com/IBMStreams/streamsx.kafka/issues/107)

  The new **startPosition** handling requires that the application always include a **JobControlPlane** operator when **startPosition** is different from `Default`.
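A sketch of an application graph satisfying this requirement (the topic name and output schema are placeholders, and use directives are omitted):

```
// Illustrative composite: a startPosition other than Default now
// requires a JobControlPlane operator somewhere in the graph.
composite ReadFromEnd {
    graph
        () as JCP = JobControlPlane() {}

        stream<rstring message> Messages = MessageHubConsumer() {
            param
                topic: "myTopic";
                startPosition: End;
        }
}
```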
### Incompatible changes

The behavior of the MessageHubConsumer operator changes when
1. the operator is *not* used in a consistent region, and
1. the **startPosition** parameter is used with `Beginning`, `End`, `Time`, or `Offset`.

In all other cases the behavior of the MessageHubConsumer is unchanged. Details of the changed behavior, including sample code that breaks, can be found in the [Toolkit documentation on GitHub](https://ibmstreams.github.io/streamsx.kafka/docs/user/kafka_toolkit_1_vs_2/).

## v1.9.4
* [IBMStreams/streamsx.kafka/#171](https://github.com/IBMStreams/streamsx.kafka/issues/171) Resetting from checkpoint will fail when sequence id is >1000

## Older releases
Please consult the [release notes](https://github.com/IBMStreams/streamsx.messagehub/releases) for the release you are interested in.

com.ibm.streamsx.messagehub/impl/java/src/com/ibm/streamsx/messagehub/operators/MessageHubConsumerOperator.java

Lines changed: 2 additions & 8 deletions
```diff
@@ -124,14 +124,6 @@ public class MessageHubConsumerOperator extends AbstractKafkaConsumerOperator {
     private String credentials = null;
     private ServiceCredentialsUtil credentialsUtil = null;
 
-    /**
-     * This method hides the parameter 'staticGroupMember' from the base class
-     * as static group membership is not (yet) supported in event streams.
-     * @see com.ibm.streamsx.kafka.operators.AbstractKafkaConsumerOperator#setStaticGroupMember(boolean)
-     */
-    @Override
-    public void setStaticGroupMember (boolean staticGrpMember) {
-    }
 
     @Parameter (optional = true, name = "credentials", description = SplDoc.PARAM_CREDENTIALS)
     public void setCredentials (String credentials) {
@@ -245,6 +237,8 @@ protected void loadFromAppConfig() throws Exception {
                 + "\\n"
                 + KafkaSplDoc.CONSUMER_KAFKA_GROUP_MANAGEMENT
                 + "\\n"
+                + KafkaSplDoc.CONSUMER_STATIC_GROUP_MEMBERSHIP
+                + "\\n"
                 + KafkaSplDoc.CONSUMER_CHECKPOINTING_CONFIG
                 + "\\n"
                 + KafkaSplDoc.CONSUMER_RESTART_BEHAVIOUR
```

0 commit comments