Bill Payment: Electricity
Created on: Oct 11, 2024
The Paytm app reads user SMS messages that contain electricity-related information such as bill reminders, transaction alerts, and payment-due data from various sources like electricity providers and financial apps such as Google Pay and PhonePe. The application processes these SMS messages and provides actionable notifications to the user.
BillSense
Messages are filtered based on billing-related criteria and then sent to the server using a REST API, where each message is published to a Kafka topic. BillSense consumes these messages, parses them, and identifies key details such as the bill amount and due date from both structured and unstructured formats. The extracted details are then published to another topic (electricity-event).
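A minimal sketch of this ingestion step, assuming a raw-SMS topic named sms-raw and string key/value serializers; the topic name and payload shape are illustrative, not the actual production values.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

// Hypothetical ingestion step: the REST endpoint hands over a filtered SMS,
// which is published to a raw-SMS Kafka topic for BillSense to consume.
public class SmsIngestionProducer {

    private static final String RAW_SMS_TOPIC = "sms-raw"; // assumed topic name

    private final KafkaProducer<String, String> producer;

    public SmsIngestionProducer(String bootstrapServers) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServers);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        this.producer = new KafkaProducer<>(props);
    }

    // Called by the REST controller once the SMS passes the billing-related filter.
    public void publish(String userId, String smsBody) {
        // Keying by userId keeps all messages of one user on the same partition.
        producer.send(new ProducerRecord<>(RAW_SMS_TOPIC, userId, smsBody));
    }

    public void close() {
        producer.close();
    }
}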
Some sample messages are:
- Dear Customer, your electricity bill for Account Number: 123456789 is due on 15th October 2024. The total amount payable is ₹2,350. Please make the payment by the due date to avoid disconnection. Thank you.
Key details
[ { "account_number": "123456789", "due_date": "2024-10-15", "bill_amount": 2350, "currency": "INR", "message": "Please make the payment by the due date to avoid disconnection.", "userid": "uniqueId", "provider": "North Bihar Power Distribution Company" } ]
Some electricity distribution companies across various states:
- Andhra Pradesh Central Power Distribution Corporation Limited
- BESCOM -> Bangalore Electricity Supply Company Ltd.
- BSES Rajdhani - Delhi
GridNotify
GridNotify consumes electricity-bill events and prepares the notification messages to be sent to each user when a bill is due. Some of these notifications are:
- Bill Generation Notification
- Due Date Reminder (1 week before the due date and 1 day before the due date)
- Bill Overdue Notification, conveying that disconnection may follow
- Disconnection warning
After these notifications are created, each notification message is produced to Kafka. The notification service consumes these messages and sends an email as well as a push notification to the user.
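A condensed sketch of how GridNotify could pick the notification type from the due date; the graceDays parameter and enum names are assumptions, not the actual service rules.

import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

// Illustrative GridNotify logic: pick the notification type from the due date,
// then hand the prepared message to a Kafka producer (producer wiring omitted).
public class NotificationPlanner {

    public enum NotificationType {
        // BILL_GENERATED is sent when the bill event first arrives and is not
        // date-based, so decide() does not return it.
        BILL_GENERATED, DUE_IN_ONE_WEEK, DUE_TOMORROW, BILL_OVERDUE, DISCONNECTION_WARNING, NONE
    }

    // graceDays (assumed) is how long after the due date the overdue notice is
    // sent before escalating to a disconnection warning.
    public static NotificationType decide(LocalDate today, LocalDate dueDate, int graceDays) {
        long daysUntilDue = ChronoUnit.DAYS.between(today, dueDate);
        if (daysUntilDue == 7) return NotificationType.DUE_IN_ONE_WEEK;
        if (daysUntilDue == 1) return NotificationType.DUE_TOMORROW;
        if (daysUntilDue < 0 && -daysUntilDue <= graceDays) return NotificationType.BILL_OVERDUE;
        if (daysUntilDue < 0) return NotificationType.DISCONNECTION_WARNING;
        return NotificationType.NONE;
    }
}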
Kafka-related config
bin/kafka-topics.sh --create --topic electricity-bill --partitions 3 --replication-factor 3 --bootstrap-server localhost:9092
Coding-level points
- GridNotify is the consumer service; it runs as part of a consumer group.
- Manual offset commit: messages are committed manually in an asynchronous manner. This ensures the thread is not blocked and can keep processing other messages. If an error occurs, the message is retried at the application level up to a configured limit; after that it is sent to a dead letter queue. (See the consumer sketch after this list.)
- DLQ: if a message cannot be processed, it is sent to a DLQ so that unprocessable messages do not block the thread. This provides a mechanism for later analysis or corrective action on failed messages.
- The poll timeout for fetching from Kafka is 1 second.
- A separate consumer object is created for each thread, since a KafkaConsumer is not thread-safe.
- Round robin is used as the partition assignment strategy, since all consumers read from the same topic.
- We have set max.poll.interval.ms to 2 seconds.
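A condensed sketch of the consumer loop described in the points above, assuming a DLQ topic named electricity-bill-dlq and a hypothetical processRecord handler; retry bookkeeping and shutdown handling are simplified.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

// Each thread owns its own KafkaConsumer (KafkaConsumer is not thread-safe).
public class GridNotifyWorker implements Runnable {

    private static final String TOPIC = "electricity-bill";
    private static final String DLQ_TOPIC = "electricity-bill-dlq"; // assumed DLQ topic name
    private static final int MAX_RETRIES = 3;                       // assumed retry limit

    private final KafkaConsumer<String, String> consumer;
    private final KafkaProducer<String, String> dlqProducer;

    public GridNotifyWorker(Properties consumerProps, Properties producerProps) {
        this.consumer = new KafkaConsumer<>(consumerProps);
        this.dlqProducer = new KafkaProducer<>(producerProps);
    }

    @Override
    public void run() {
        consumer.subscribe(Collections.singletonList(TOPIC));
        while (true) {
            // Poll timeout of 1 second, as noted above.
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<String, String> record : records) {
                boolean processed = false;
                for (int attempt = 1; attempt <= MAX_RETRIES && !processed; attempt++) {
                    try {
                        processRecord(record); // hypothetical business logic
                        processed = true;
                    } catch (Exception e) {
                        // Application-level retry; fall through to DLQ after MAX_RETRIES.
                    }
                }
                if (!processed) {
                    dlqProducer.send(new ProducerRecord<>(DLQ_TOPIC, record.key(), record.value()));
                }
            }
            // Asynchronous manual commit so the thread is not blocked.
            consumer.commitAsync();
        }
    }

    private void processRecord(ConsumerRecord<String, String> record) {
        // Parse the bill event and prepare the notification (omitted).
    }
}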
Logging for multiple consumers
SLF4J with Logback is used for logging. Log messages are written asynchronously so that logging does not block the main thread. Since multiple instances are running, logs are shipped to the ELK stack.
Each log entry carries the clientId, partition, and offset.
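One way to attach these fields to every log line is SLF4J's MDC; a small illustrative sketch (the field names and the Logback pattern wiring, e.g. %X{clientId}, are assumptions):

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

// Put clientId/partition/offset into the MDC so every log line written while
// handling the record carries these fields.
public class RecordLogging {

    private static final Logger log = LoggerFactory.getLogger(RecordLogging.class);

    public static void logWithContext(String clientId, ConsumerRecord<String, String> record) {
        MDC.put("clientId", clientId);
        MDC.put("partition", String.valueOf(record.partition()));
        MDC.put("offset", String.valueOf(record.offset()));
        try {
            log.info("Processing bill event");
        } finally {
            MDC.clear(); // avoid leaking context to the next record on this thread
        }
    }
}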
Producer config
# 5 ms
linger.ms=5
# 32 KB (1024 * 32)
batch.size=32768
# 32 MB (32 * 1024 * 1024)
buffer.memory=33554432
Consumer config
# Length of time the consumer can go without polling before it is considered dead.
max.poll.interval.ms=500000
auto.offset.reset=earliest
enable.auto.commit=false
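These settings applied in code might look like the following sketch; the group id, bootstrap address, and deserializers are assumptions.

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;

// Sketch: consumer properties matching the values above.
public class GridNotifyConsumerConfig {

    public static Properties build() {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "gridnotify"); // assumed group id
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        // Max gap between polls before the consumer is considered dead.
        props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 500000);
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        // Offsets are committed manually (commitAsync in the worker).
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
        return props;
    }
}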
Singleton design pattern: real use cases in the project.
- Configuration Manager:
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class ConfigurationManager {
    private static volatile ConfigurationManager instance;
    private Properties properties;

    private ConfigurationManager() {
        properties = new Properties();
        try {
            // Load properties from file once
            properties.load(new FileInputStream("config.properties"));
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public static ConfigurationManager getInstance() {
        if (instance == null) {
            synchronized (ConfigurationManager.class) {
                if (instance == null) {
                    instance = new ConfigurationManager();
                }
            }
        }
        return instance;
    }

    public String getProperty(String key) {
        return properties.getProperty(key);
    }
}
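Typical usage (the property key is illustrative):

public class ConfigurationManagerDemo {
    public static void main(String[] args) {
        // Read a property anywhere in the application without re-loading the file;
        // the key name "kafka.bootstrap.servers" is an assumed example.
        String bootstrapServers = ConfigurationManager.getInstance().getProperty("kafka.bootstrap.servers");
        System.out.println("bootstrap.servers = " + bootstrapServers);
    }
}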
- Database Connection Pool
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>8.0.33</version> <!-- Use the latest version -->
</dependency>
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class DatabaseConnectionPool {
    // volatile is required for safe double-checked locking
    private static volatile DatabaseConnectionPool instance;
    private final List<Connection> connectionPool;
    private static final int MAX_POOL_SIZE = 10;

    private DatabaseConnectionPool() {
        connectionPool = new ArrayList<>();
        try {
            for (int i = 0; i < MAX_POOL_SIZE; i++) {
                connectionPool.add(DriverManager.getConnection(
                        "jdbc:mysql://localhost:3306/mydb", "user", "password"));
            }
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }

    public static DatabaseConnectionPool getInstance() {
        if (instance == null) {
            synchronized (DatabaseConnectionPool.class) {
                if (instance == null) {
                    instance = new DatabaseConnectionPool();
                }
            }
        }
        return instance;
    }

    // Hand out a connection from the pool (basic example; synchronized because
    // multiple threads share the pool).
    public synchronized Connection getConnection() {
        if (connectionPool.isEmpty()) {
            throw new IllegalStateException("No connections available in the pool");
        }
        return connectionPool.remove(connectionPool.size() - 1);
    }

    // Return the connection back to the pool.
    public synchronized void returnConnection(Connection connection) {
        connectionPool.add(connection);
    }
}
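Typical usage, returning the connection in a finally block so the pool does not drain:

import java.sql.Connection;

public class ConnectionPoolDemo {
    public static void main(String[] args) {
        DatabaseConnectionPool pool = DatabaseConnectionPool.getInstance();
        Connection connection = pool.getConnection();
        try {
            // Run queries against the bill-payment schema here (omitted).
        } finally {
            // Always hand the connection back to the pool.
            pool.returnConnection(connection);
        }
    }
}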
We made CacheManager, ConfigurationManager, and ThreadPoolManager singleton classes in the bill payment application.
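For reference, a ThreadPoolManager along the same lines might look like the sketch below; the pool size and method names are assumptions, not the production class.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Singleton wrapper around a shared ExecutorService, following the same
// double-checked-locking pattern as the classes above.
public class ThreadPoolManager {

    private static volatile ThreadPoolManager instance;
    private final ExecutorService executor;

    private ThreadPoolManager() {
        // Pool size of 10 is illustrative.
        executor = Executors.newFixedThreadPool(10);
    }

    public static ThreadPoolManager getInstance() {
        if (instance == null) {
            synchronized (ThreadPoolManager.class) {
                if (instance == null) {
                    instance = new ThreadPoolManager();
                }
            }
        }
        return instance;
    }

    public void submit(Runnable task) {
        executor.submit(task);
    }

    public void shutdown() {
        executor.shutdown();
    }
}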
