diff --git a/examples/README.md b/examples/README.md
index b51d560d7bb..a07d1b38a14 100644
--- a/examples/README.md
+++ b/examples/README.md
@@ -27,114 +27,32 @@ before trying out the examples.
- [Json serialization](src/main/java/io/grpc/examples/advanced)
--
- Hedging
-
- The [hedging example](src/main/java/io/grpc/examples/hedging) demonstrates that enabling hedging
- can reduce tail latency. (Users should note that enabling hedging may introduce other overhead;
- and in some scenarios, such as when some server resource gets exhausted for a period of time and
- almost every RPC during that time has high latency or fails, hedging may make things worse.
- Setting a throttle in the service config is recommended to protect the server from too many
- inappropriate retry or hedging requests.)
-
- The server and the client in the example are basically the same as those in the
- [hello world](src/main/java/io/grpc/examples/helloworld) example, except that the server mimics a
- long tail of latency, and the client sends 2000 requests and can turn on and off hedging.
-
- To mimic the latency, the server randomly delays the RPC handling by 2 seconds at 10% chance, 5
- seconds at 5% chance, and 10 seconds at 1% chance.
-
- When running the client enabling the following hedging policy
-
- ```json
- "hedgingPolicy": {
- "maxAttempts": 3,
- "hedgingDelay": "1s"
- }
- ```
- Then the latency summary in the client log is like the following
-
- ```text
- Total RPCs sent: 2,000. Total RPCs failed: 0
- [Hedging enabled]
- ========================
- 50% latency: 0ms
- 90% latency: 6ms
- 95% latency: 1,003ms
- 99% latency: 2,002ms
- 99.9% latency: 2,011ms
- Max latency: 5,272ms
- ========================
- ```
-
- See [the section below](#to-build-the-examples) for how to build and run the example. The
- executables for the server and the client are `hedging-hello-world-server` and
- `hedging-hello-world-client`.
-
- To disable hedging, set environment variable `DISABLE_HEDGING_IN_HEDGING_EXAMPLE=true` before
- running the client. That produces a latency summary in the client log like the following
-
- ```text
- Total RPCs sent: 2,000. Total RPCs failed: 0
- [Hedging disabled]
- ========================
- 50% latency: 0ms
- 90% latency: 2,002ms
- 95% latency: 5,002ms
- 99% latency: 10,004ms
- 99.9% latency: 10,007ms
- Max latency: 10,007ms
- ========================
- ```
-
-
-
--
- Retrying
-
- The [retrying example](src/main/java/io/grpc/examples/retrying) provides a HelloWorld gRPC client &
- server which demos the effect of client retry policy configured on the [ManagedChannel](
- ../api/src/main/java/io/grpc/ManagedChannel.java) via [gRPC ServiceConfig](
- https://github.com/grpc/grpc/blob/master/doc/service_config.md). Retry policy implementation &
- configuration details are outlined in the [proposal](https://github.com/grpc/proposal/blob/master/A6-client-retries.md).
-
- This retrying example is very similar to the [hedging example](src/main/java/io/grpc/examples/hedging) in its setup.
- The [RetryingHelloWorldServer](src/main/java/io/grpc/examples/retrying/RetryingHelloWorldServer.java) responds with
- a status UNAVAILABLE error response to a specified percentage of requests to simulate server resource exhaustion and
- general flakiness. The [RetryingHelloWorldClient](src/main/java/io/grpc/examples/retrying/RetryingHelloWorldClient.java) makes
- a number of sequential requests to the server, several of which will be retried depending on the configured policy in
- [retrying_service_config.json](src/main/resources/io/grpc/examples/retrying/retrying_service_config.json). Although
- the requests are blocking unary calls for simplicity, these could easily be changed to future unary calls in order to
- test the result of request concurrency with retry policy enabled.
-
- One can experiment with the [RetryingHelloWorldServer](src/main/java/io/grpc/examples/retrying/RetryingHelloWorldServer.java)
- failure conditions to simulate server throttling, as well as alter policy values in the [retrying_service_config.json](
- src/main/resources/io/grpc/examples/retrying/retrying_service_config.json) to see their effects. To disable retrying
- entirely, set environment variable `DISABLE_RETRYING_IN_RETRYING_EXAMPLE=true` before running the client.
- Disabling the retry policy should produce many more failed gRPC calls as seen in the output log.
-
- See [the section below](#to-build-the-examples) for how to build and run the example. The
- executables for the server and the client are `retrying-hello-world-server` and
- `retrying-hello-world-client`.
-
-
-
--
- Health Service
-
- The [health service example](src/main/java/io/grpc/examples/healthservice)
- provides a HelloWorld gRPC server that doesn't like short names along with a
- health service. It also provides a client application which makes HelloWorld
- calls and checks the health status.
-
- The client application also shows how the round robin load balancer can
- utilize the health status to avoid making calls to a service that is
- not actively serving.
-
+- [Hedging example](src/main/java/io/grpc/examples/hedging)
+- [Retrying example](src/main/java/io/grpc/examples/retrying)
+- [Health Service example](src/main/java/io/grpc/examples/healthservice)
- [Keep Alive](src/main/java/io/grpc/examples/keepalive)
+- [Cancellation](src/main/java/io/grpc/examples/cancellation)
+- [Custom Load Balance](src/main/java/io/grpc/examples/customloadbalance)
+- [Deadline](src/main/java/io/grpc/examples/deadline)
+- [Error Details](src/main/java/io/grpc/examples/errordetails)
+- [gRPC Proxy](src/main/java/io/grpc/examples/grpcproxy)
+- [Load Balance](src/main/java/io/grpc/examples/loadbalance)
+- [Multiplex](src/main/java/io/grpc/examples/multiplex)
+- [Name Resolve](src/main/java/io/grpc/examples/nameresolve)
+- [Pre-Serialized Messages](src/main/java/io/grpc/examples/preserialized)
+
### To build the examples
1. **[Install gRPC Java library SNAPSHOT locally, including code generation plugin](../COMPILING.md) (Only need this step for non-released versions, e.g. master HEAD).**
diff --git a/examples/example-alts/example-alts/README.md b/examples/example-alts/README.md
similarity index 100%
rename from examples/example-alts/example-alts/README.md
rename to examples/example-alts/README.md
diff --git a/examples/src/main/java/io/grpc/examples/advanced/README.md b/examples/src/main/java/io/grpc/examples/advanced/README.md
new file mode 100644
index 00000000000..f5b5c6cc7fc
--- /dev/null
+++ b/examples/src/main/java/io/grpc/examples/advanced/README.md
@@ -0,0 +1,16 @@
+gRPC JSON Serialization Example
+=====================
+
+gRPC is a modern high-performance framework for building Remote Procedure Call (RPC) systems.
+It commonly uses Protocol Buffers (Protobuf) as its serialization format, which is compact and efficient.
+However, gRPC can also support JSON serialization when needed, typically for interoperability with
+systems or clients that do not use Protobuf.
+This is an advanced example of how to swap out the serialization logic. Normal users do not need to do this.
+This code is not intended to be a production-ready implementation, since JSON encoding is slow.
+Additionally, JSON serialization as implemented may not be resilient to malicious input.
+
+This advanced example uses a Marshaller for JSON that marshals in the proto3 JSON format described at
+https://developers.google.com/protocol-buffers/docs/proto3#json
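+
+As a rough sketch (not this example's exact code), a JSON marshaller for a protobuf message type can be built on `JsonFormat`:
+
+```java
+import com.google.protobuf.Message;
+import com.google.protobuf.util.JsonFormat;
+import io.grpc.MethodDescriptor.Marshaller;
+import java.io.ByteArrayInputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.InputStreamReader;
+import java.io.Reader;
+import java.nio.charset.StandardCharsets;
+
+// A minimal sketch; not necessarily identical to this example's marshaller.
+final class JsonMarshaller<T extends Message> implements Marshaller<T> {
+  private final T defaultInstance;
+
+  JsonMarshaller(T defaultInstance) {
+    this.defaultInstance = defaultInstance;
+  }
+
+  @Override
+  public InputStream stream(T value) {
+    try {
+      // Serialize the message to proto3 JSON instead of binary protobuf.
+      String json = JsonFormat.printer().print(value);
+      return new ByteArrayInputStream(json.getBytes(StandardCharsets.UTF_8));
+    } catch (IOException e) {
+      throw new RuntimeException(e);
+    }
+  }
+
+  @Override
+  @SuppressWarnings("unchecked")
+  public T parse(InputStream stream) {
+    try (Reader reader = new InputStreamReader(stream, StandardCharsets.UTF_8)) {
+      Message.Builder builder = defaultInstance.newBuilderForType();
+      JsonFormat.parser().merge(reader, builder);
+      return (T) builder.build();
+    } catch (IOException e) {
+      throw new RuntimeException(e);
+    }
+  }
+}
+```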
+
+If you are considering implementing your own serialization logic, contact the gRPC team at
+https://groups.google.com/forum/#!forum/grpc-io
diff --git a/examples/src/main/java/io/grpc/examples/cancellation/README.md b/examples/src/main/java/io/grpc/examples/cancellation/README.md
new file mode 100644
index 00000000000..6b11a17c517
--- /dev/null
+++ b/examples/src/main/java/io/grpc/examples/cancellation/README.md
@@ -0,0 +1,18 @@
+gRPC Cancellation Example
+=====================
+
+When a gRPC client is no longer interested in the result of an RPC call,
+it can cancel the call to signal that loss of interest to the server.
+
+Any abort of an ongoing RPC is considered "cancellation" of that RPC.
+The common causes of cancellation are the client explicitly cancelling, the deadline expiring, and I/O failures.
+The service is not informed of the reason for the cancellation.
+
+There are two APIs for a service to be notified of RPC cancellation: `io.grpc.Context` and `ServerCallStreamObserver`.
+
+Context listeners are called on a different thread, so they need to be thread-safe.
+The ServerCallStreamObserver cancellation callback is called like other StreamObserver callbacks,
+so the application may not need thread-safe handling.
+Both APIs have thread-safe isCancelled() polling methods.
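+
+A minimal sketch of both notification APIs inside a unary handler (assuming the generated hello world classes; not this example's exact code):
+
+```java
+import io.grpc.Context;
+import io.grpc.examples.helloworld.GreeterGrpc;
+import io.grpc.examples.helloworld.HelloReply;
+import io.grpc.examples.helloworld.HelloRequest;
+import io.grpc.stub.ServerCallStreamObserver;
+import io.grpc.stub.StreamObserver;
+
+public class CancellationAwareGreeter extends GreeterGrpc.GreeterImplBase {
+  @Override
+  public void sayHello(HelloRequest request, StreamObserver<HelloReply> responseObserver) {
+    // Option 1: io.grpc.Context listener. It runs on the supplied executor
+    // (possibly a different thread), so the callback must be thread-safe.
+    Context.current().addListener(
+        context -> System.out.println("RPC cancelled (Context listener)"),
+        Runnable::run);
+
+    // Option 2: ServerCallStreamObserver callback, invoked like other
+    // StreamObserver callbacks.
+    ServerCallStreamObserver<HelloReply> serverObserver =
+        (ServerCallStreamObserver<HelloReply>) responseObserver;
+    serverObserver.setOnCancelHandler(
+        () -> System.out.println("RPC cancelled (onCancelHandler)"));
+
+    responseObserver.onNext(
+        HelloReply.newBuilder().setMessage("Hello " + request.getName()).build());
+    responseObserver.onCompleted();
+  }
+}
+```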
+
+Refer to the gRPC documentation for details on RPC cancellation: https://grpc.io/docs/guides/cancellation/
diff --git a/examples/src/main/java/io/grpc/examples/customloadbalance/README.md b/examples/src/main/java/io/grpc/examples/customloadbalance/README.md
new file mode 100644
index 00000000000..20dbccb81ac
--- /dev/null
+++ b/examples/src/main/java/io/grpc/examples/customloadbalance/README.md
@@ -0,0 +1,19 @@
+gRPC Custom Load Balance Example
+=====================
+
+One of the key features of gRPC is load balancing, which allows requests from clients to be distributed across multiple servers.
+This helps prevent any one server from becoming overloaded and allows the system to scale up by adding more servers.
+
+A gRPC load balancing policy is given a list of server IP addresses by the name resolver.
+The policy is responsible for maintaining connections (subchannels) to the servers and picking a connection to use when an RPC is sent.
+
+This example shows how to implement your own custom load balancing policy when the built-in policies
+do not meet your requirements. Implementing one involves the following steps (a minimal sketch follows the list):
+
+ - Register your implementation in the load balancer registry so that it can be referred to from the service config
+ - Parse the JSON configuration object of your implementation. This allows your load balancer to be configured in the service config with any arbitrary JSON you choose to support
+ - Manage what backends to maintain a connection with
+ - Implement a picker that will choose which backend to connect to when an RPC is made. Note that this needs to be a fast operation as it is on the RPC call path
+ - To enable your load balancer, configure it in your service config
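+
+A minimal sketch of steps 1 and 5; the provider class and policy name below are hypothetical, not necessarily the ones used in this example:
+
+```java
+import io.grpc.LoadBalancerRegistry;
+import io.grpc.ManagedChannel;
+import io.grpc.ManagedChannelBuilder;
+import java.util.Collections;
+import java.util.Map;
+
+public class CustomLbClientSketch {
+  public static void main(String[] args) {
+    // Step 1: register the custom provider so the policy name "my_custom_lb" can be
+    // referenced from a service config. MyLoadBalancerProvider is a hypothetical class.
+    LoadBalancerRegistry.getDefaultRegistry().register(new MyLoadBalancerProvider());
+
+    // Step 5: select the policy (with arbitrary JSON-style config) through the
+    // channel's default service config.
+    Map<String, ?> serviceConfig = Collections.singletonMap(
+        "loadBalancingConfig",
+        Collections.singletonList(
+            Collections.singletonMap("my_custom_lb", Collections.emptyMap())));
+
+    ManagedChannel channel = ManagedChannelBuilder.forTarget("localhost:50051")
+        .defaultServiceConfig(serviceConfig)
+        .usePlaintext()
+        .build();
+    // ... create stubs on the channel and issue RPCs as usual ...
+  }
+}
+```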
+
+Refer to the gRPC documentation for more details: https://grpc.io/docs/guides/custom-load-balancing/
diff --git a/examples/src/main/java/io/grpc/examples/deadline/README.md b/examples/src/main/java/io/grpc/examples/deadline/README.md
new file mode 100644
index 00000000000..3c7646f1e5f
--- /dev/null
+++ b/examples/src/main/java/io/grpc/examples/deadline/README.md
@@ -0,0 +1,15 @@
+gRPC Deadline Example
+=====================
+
+A Deadline is used to specify a point in time past which a client is unwilling to wait for a response from a server.
+This simple idea is very important in building robust distributed systems.
+Clients that do not wait around unnecessarily and servers that know when to give up processing requests will improve the resource utilization and latency of your system.
+
+Note that while some language APIs have the concept of a deadline, others use the idea of a timeout.
+When an API asks for a deadline, you provide a point in time which the call should not go past.
+A timeout is the max duration of time that the call can take.
+A timeout can be converted to a deadline by adding the timeout to the current time when the application starts a call.
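+
+For instance, a minimal client-side sketch (assuming the generated hello world classes; not this example's exact code) that converts a 1-second timeout into a deadline:
+
+```java
+import io.grpc.ManagedChannel;
+import io.grpc.ManagedChannelBuilder;
+import io.grpc.StatusRuntimeException;
+import io.grpc.examples.helloworld.GreeterGrpc;
+import io.grpc.examples.helloworld.HelloRequest;
+import java.util.concurrent.TimeUnit;
+
+public class DeadlineClientSketch {
+  public static void main(String[] args) throws InterruptedException {
+    ManagedChannel channel =
+        ManagedChannelBuilder.forTarget("localhost:50051").usePlaintext().build();
+    try {
+      GreeterGrpc.GreeterBlockingStub stub = GreeterGrpc.newBlockingStub(channel)
+          // withDeadlineAfter converts the 1-second timeout into an absolute deadline here.
+          .withDeadlineAfter(1, TimeUnit.SECONDS);
+      stub.sayHello(HelloRequest.newBuilder().setName("world").build());
+    } catch (StatusRuntimeException e) {
+      // DEADLINE_EXCEEDED if the server did not respond in time.
+      System.out.println("RPC failed: " + e.getStatus());
+    } finally {
+      channel.shutdownNow().awaitTermination(5, TimeUnit.SECONDS);
+    }
+  }
+}
+```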
+
+This example shows how to set deadlines on the client, how to respect them on the server, and how deadline propagation works.
+
+Refer to the gRPC documentation for more details on deadlines: https://grpc.io/docs/guides/deadlines/
\ No newline at end of file
diff --git a/examples/src/main/java/io/grpc/examples/errordetails/README.md b/examples/src/main/java/io/grpc/examples/errordetails/README.md
new file mode 100644
index 00000000000..8f241ba37a7
--- /dev/null
+++ b/examples/src/main/java/io/grpc/examples/errordetails/README.md
@@ -0,0 +1,16 @@
+gRPC Error Details Example
+=====================
+
+If a gRPC call completes successfully, the server returns an OK status to the client (depending on the language, the OK status may or may not be directly used in your code).
+But what happens if the call isn't successful?
+
+This example shows how a server can return rich error details when a gRPC call fails,
+and how to set and read `com.google.rpc.Status` objects as error details.
+
+gRPC allows detailed error information to be encapsulated in protobuf messages, which are sent alongside the status codes.
+
+If an error occurs, gRPC returns one of its error status codes together with an error message that provides further details about what happened.
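+
+A minimal sketch of both sides (assuming the generated hello world classes and the `com.google.rpc` protos from proto-google-common-protos; not this example's exact code):
+
+```java
+import com.google.protobuf.Any;
+import com.google.protobuf.InvalidProtocolBufferException;
+import com.google.rpc.ErrorInfo;
+import io.grpc.StatusRuntimeException;
+import io.grpc.protobuf.StatusProto;
+
+public class ErrorDetailsSketch {
+  // Server side: build a rich com.google.rpc.Status and fail the RPC with it.
+  static StatusRuntimeException emptyNameError() {
+    com.google.rpc.Status status = com.google.rpc.Status.newBuilder()
+        .setCode(io.grpc.Status.Code.INVALID_ARGUMENT.value())
+        .setMessage("Name cannot be empty")
+        .addDetails(Any.pack(ErrorInfo.newBuilder()
+            .setReason("EMPTY_NAME")
+            .setDomain("example.grpc.io")
+            .build()))
+        .build();
+    return StatusProto.toStatusRuntimeException(status);  // pass to responseObserver.onError(...)
+  }
+
+  // Client side: recover the rich status and its details from the exception.
+  static void printDetails(StatusRuntimeException e) throws InvalidProtocolBufferException {
+    com.google.rpc.Status status = StatusProto.fromThrowable(e);
+    if (status == null) {
+      return;  // no rich status attached
+    }
+    for (Any detail : status.getDetailsList()) {
+      if (detail.is(ErrorInfo.class)) {
+        System.out.println("Reason: " + detail.unpack(ErrorInfo.class).getReason());
+      }
+    }
+  }
+}
+```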
+
+Refer to the links below for more details on error details and status codes:
+- https://grpc.io/docs/guides/error/
+- https://github.com/grpc/grpc-java/blob/master/api/src/main/java/io/grpc/Status.java
\ No newline at end of file
diff --git a/examples/src/main/java/io/grpc/examples/errorhandling/README.md b/examples/src/main/java/io/grpc/examples/errorhandling/README.md
new file mode 100644
index 00000000000..a920e939c86
--- /dev/null
+++ b/examples/src/main/java/io/grpc/examples/errorhandling/README.md
@@ -0,0 +1,27 @@
+gRPC Error Handling Example
+=====================
+
+Error handling in gRPC is a critical aspect of designing reliable and robust distributed systems.
+gRPC provides a standardized mechanism for handling errors using status codes, error details, and optional metadata.
+
+This example shows how to handle errors in gRPC: how to extract error information from a failed RPC
+on the client, and how to set and read RPC error details.
+
+If a gRPC call completes successfully, the server returns an OK status to the client (depending on the language, the OK status may or may not be directly used in your code).
+
+If an error occurs, gRPC returns one of its error status codes together with an error message that provides further details about what happened.
+
+Error Propagation:
+- When an error occurs on the server, gRPC stops processing the RPC and sends the error (status code, description, and optional details) to the client.
+- On the client side, the error can be handled based on the status code.
+
+Client Side Error Handling:
+ - The gRPC client typically throws an exception or returns an error object when an RPC fails.
+
+Server Side Error Handling:
+- Servers use the gRPC API to return errors explicitly, using the gRPC library's status APIs.
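+
+A minimal sketch of both sides (assuming the generated hello world classes; not this example's exact code):
+
+```java
+import io.grpc.Status;
+import io.grpc.StatusRuntimeException;
+import io.grpc.examples.helloworld.GreeterGrpc;
+import io.grpc.examples.helloworld.HelloRequest;
+import io.grpc.stub.StreamObserver;
+
+public class ErrorHandlingSketch {
+  // Server side (inside a handler): fail the RPC with a status code and description.
+  static void failRpc(StreamObserver<?> responseObserver) {
+    responseObserver.onError(
+        Status.INVALID_ARGUMENT.withDescription("Name cannot be empty").asRuntimeException());
+  }
+
+  // Client side: blocking stubs surface the failure as a StatusRuntimeException.
+  static void callAndHandle(GreeterGrpc.GreeterBlockingStub blockingStub) {
+    try {
+      blockingStub.sayHello(HelloRequest.newBuilder().setName("").build());
+    } catch (StatusRuntimeException e) {
+      System.out.println("Code: " + e.getStatus().getCode());        // e.g. INVALID_ARGUMENT
+      System.out.println("Description: " + e.getStatus().getDescription());
+    }
+  }
+}
+```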
+
+gRPC uses predefined status codes to represent the outcome of an RPC call. These status codes are part of the Status object that is sent from the server to the client.
+Each status code is accompanied by a human-readable description (see https://github.com/grpc/grpc-java/blob/master/api/src/main/java/io/grpc/Status.java).
+
+Refer to the gRPC documentation for more details on error handling: https://grpc.io/docs/guides/error/
\ No newline at end of file
diff --git a/examples/src/main/java/io/grpc/examples/experimental/README.md b/examples/src/main/java/io/grpc/examples/experimental/README.md
new file mode 100644
index 00000000000..295b0801538
--- /dev/null
+++ b/examples/src/main/java/io/grpc/examples/experimental/README.md
@@ -0,0 +1,13 @@
+gRPC Compression Example
+=====================
+
+This example shows how clients can specify compression options when performing RPCs,
+and how to enable compressed (e.g. gzip) requests and responses for a particular method, or for all methods by using an interceptor.
+
+Compression is used to reduce the amount of bandwidth used when communicating between peers,
+and can be enabled or disabled at the call or message level.
+
+gRPC allows asymmetrically compressed communication, whereby a response may be compressed differently from the request,
+or not compressed at all.
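+
+A minimal sketch of each side (assuming the generated hello world classes; not this example's exact code):
+
+```java
+import io.grpc.examples.helloworld.GreeterGrpc;
+import io.grpc.examples.helloworld.HelloReply;
+import io.grpc.stub.ServerCallStreamObserver;
+import io.grpc.stub.StreamObserver;
+
+public class CompressionSketch {
+  // Client side: request gzip compression of outbound messages on this stub.
+  static GreeterGrpc.GreeterBlockingStub compressingStub(io.grpc.Channel channel) {
+    return GreeterGrpc.newBlockingStub(channel).withCompression("gzip");
+  }
+
+  // Server side (inside a handler): compress this call's responses with gzip.
+  static void compressResponses(StreamObserver<HelloReply> responseObserver) {
+    ServerCallStreamObserver<HelloReply> serverObserver =
+        (ServerCallStreamObserver<HelloReply>) responseObserver;
+    serverObserver.setCompression("gzip");
+  }
+}
+```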
+
+Refer to the gRPC documentation for more details on compression: https://grpc.io/docs/guides/compression/
\ No newline at end of file
diff --git a/examples/src/main/java/io/grpc/examples/grpcproxy/README.md b/examples/src/main/java/io/grpc/examples/grpcproxy/README.md
new file mode 100644
index 00000000000..cc13dc3d9d0
--- /dev/null
+++ b/examples/src/main/java/io/grpc/examples/grpcproxy/README.md
@@ -0,0 +1,22 @@
+gRPC Proxy Example
+=====================
+
+A gRPC proxy is a component or tool that acts as an intermediary between gRPC clients and servers,
+facilitating communication while offering additional capabilities.
+Proxies are used in scenarios where you need to handle tasks like load balancing, routing, monitoring,
+or providing a bridge between gRPC and other protocols.
+
+GrpcProxy itself can be used unmodified to proxy any service, for both unary and streaming RPCs.
+It doesn't care what type of messages are being used.
+The Registry class causes it to be called for any inbound RPC, and it uses plain bytes for messages, which avoids marshalling
+messages and the need for protobuf schema information.
+
+You can run the gRPC proxy with the route guide example to see how it works.
+
+Route guide has unary and streaming RPCs, which makes it a nice showcase; run each of the following in a separate terminal window:
+
+    ./build/install/examples/bin/route-guide-server
+    ./build/install/examples/bin/grpc-proxy
+    ./build/install/examples/bin/route-guide-client localhost:8981
+
+You can verify the proxy is being used by shutting down the proxy and seeing the client fail.
\ No newline at end of file
diff --git a/examples/src/main/java/io/grpc/examples/header/README.md b/examples/src/main/java/io/grpc/examples/header/README.md
new file mode 100644
index 00000000000..1563a2799cc
--- /dev/null
+++ b/examples/src/main/java/io/grpc/examples/header/README.md
@@ -0,0 +1,16 @@
+gRPC Custom Header Example
+=====================
+
+This example shows how to create and process (send/receive) custom headers between the client and the server
+using interceptors (HeaderClientInterceptor, HeaderServerInterceptor) together with Metadata.
+
+Metadata is a side channel that allows clients and servers to provide information to each other that is associated with an RPC.
+gRPC metadata is a key-value pair of data that is sent with initial or final gRPC requests or responses.
+It is used to provide additional information about the call, such as authentication credentials,
+tracing information, or custom headers.
+
+gRPC metadata can be used to send custom headers to the server or from the server to the client.
+This can be used to implement application-specific features, such as load balancing,
+rate limiting or providing detailed error messages from the server to the client.
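+
+A minimal sketch of a client interceptor that attaches a custom header to every outgoing RPC (the header name and value here are hypothetical):
+
+```java
+import io.grpc.CallOptions;
+import io.grpc.Channel;
+import io.grpc.ClientCall;
+import io.grpc.ClientInterceptor;
+import io.grpc.ForwardingClientCall.SimpleForwardingClientCall;
+import io.grpc.Metadata;
+import io.grpc.MethodDescriptor;
+
+public class CustomHeaderInterceptor implements ClientInterceptor {
+  private static final Metadata.Key<String> CUSTOM_HEADER_KEY =
+      Metadata.Key.of("custom_client_header_key", Metadata.ASCII_STRING_MARSHALLER);
+
+  @Override
+  public <ReqT, RespT> ClientCall<ReqT, RespT> interceptCall(
+      MethodDescriptor<ReqT, RespT> method, CallOptions callOptions, Channel next) {
+    return new SimpleForwardingClientCall<ReqT, RespT>(next.newCall(method, callOptions)) {
+      @Override
+      public void start(Listener<RespT> responseListener, Metadata headers) {
+        // Attach the custom header before the call is started.
+        headers.put(CUSTOM_HEADER_KEY, "customRequestValue");
+        super.start(responseListener, headers);
+      }
+    };
+  }
+}
+```
+
+The interceptor can then be attached with `ClientInterceptors.intercept(channel, ...)` or `stub.withInterceptors(...)`.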
+
+Refer to the gRPC documentation for more on metadata and headers: https://grpc.io/docs/guides/metadata/
\ No newline at end of file
diff --git a/examples/src/main/java/io/grpc/examples/healthservice/README.md b/examples/src/main/java/io/grpc/examples/healthservice/README.md
new file mode 100644
index 00000000000..181bd70977f
--- /dev/null
+++ b/examples/src/main/java/io/grpc/examples/healthservice/README.md
@@ -0,0 +1,10 @@
+gRPC Health Service Example
+=====================
+
+The Health Service example provides a HelloWorld gRPC server that doesn't like short names along with a
+health service. It also provides a client application which makes HelloWorld
+calls and checks the health status.
+
+The client application also shows how the round robin load balancer can
+utilize the health status to avoid making calls to a service that is
+not actively serving.
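+
+As a rough sketch, a server can expose the standard health service alongside its application services and update the serving status (names follow the hello world example; GreeterImpl is a placeholder):
+
+```java
+import io.grpc.Server;
+import io.grpc.ServerBuilder;
+import io.grpc.health.v1.HealthCheckResponse.ServingStatus;
+import io.grpc.protobuf.services.HealthStatusManager;
+
+public class HealthServerSketch {
+  public static void main(String[] args) throws Exception {
+    HealthStatusManager health = new HealthStatusManager();
+    Server server = ServerBuilder.forPort(50051)
+        .addService(new GreeterImpl())           // your application service (placeholder name)
+        .addService(health.getHealthService())   // the standard gRPC health service
+        .build()
+        .start();
+
+    // Mark the Greeter service as healthy; flip to NOT_SERVING when it is not.
+    health.setStatus("helloworld.Greeter", ServingStatus.SERVING);
+
+    server.awaitTermination();
+  }
+}
+```
+
+On the client side, health checking is enabled through the channel's service config (`healthCheckConfig`), which lets the round robin policy consult these statuses.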
diff --git a/examples/src/main/java/io/grpc/examples/hedging/README.md b/examples/src/main/java/io/grpc/examples/hedging/README.md
new file mode 100644
index 00000000000..0154e5c2cee
--- /dev/null
+++ b/examples/src/main/java/io/grpc/examples/hedging/README.md
@@ -0,0 +1,59 @@
+gRPC Hedging Example
+=====================
+
+The Hedging example demonstrates that enabling hedging
+can reduce tail latency. (Users should note that enabling hedging may introduce other overhead;
+and in some scenarios, such as when some server resource gets exhausted for a period of time and
+almost every RPC during that time has high latency or fails, hedging may make things worse.
+Setting a throttle in the service config is recommended to protect the server from too many
+inappropriate retry or hedging requests.)
+
+The server and the client in the example are basically the same as those in the
+[hello world](../helloworld) example, except that the server mimics a
+long tail of latency, and the client sends 2000 requests and can turn on and off hedging.
+
+To mimic the latency, the server randomly delays the RPC handling by 2 seconds at 10% chance, 5
+seconds at 5% chance, and 10 seconds at 1% chance.
+
+When the client runs with the following hedging policy enabled
+
+ ```json
+ "hedgingPolicy": {
+ "maxAttempts": 3,
+ "hedgingDelay": "1s"
+ }
+ ```
+then the latency summary in the client log looks like the following
+
+ ```text
+ Total RPCs sent: 2,000. Total RPCs failed: 0
+ [Hedging enabled]
+ ========================
+ 50% latency: 0ms
+ 90% latency: 6ms
+ 95% latency: 1,003ms
+ 99% latency: 2,002ms
+ 99.9% latency: 2,011ms
+ Max latency: 5,272ms
+ ========================
+ ```
+
+See [the examples README](https://github.com/grpc/grpc-java/tree/master/examples#to-build-the-examples) for how to build and run the example. The
+executables for the server and the client are `hedging-hello-world-server` and
+`hedging-hello-world-client`.
+
+To disable hedging, set environment variable `DISABLE_HEDGING_IN_HEDGING_EXAMPLE=true` before
+running the client. That produces a latency summary in the client log like the following
+
+ ```text
+ Total RPCs sent: 2,000. Total RPCs failed: 0
+ [Hedging disabled]
+ ========================
+ 50% latency: 0ms
+ 90% latency: 2,002ms
+ 95% latency: 5,002ms
+ 99% latency: 10,004ms
+ 99.9% latency: 10,007ms
+ Max latency: 10,007ms
+ ========================
+ ```
diff --git a/examples/src/main/java/io/grpc/examples/helloworld/README.md b/examples/src/main/java/io/grpc/examples/helloworld/README.md
new file mode 100644
index 00000000000..5b11d4945c2
--- /dev/null
+++ b/examples/src/main/java/io/grpc/examples/helloworld/README.md
@@ -0,0 +1,7 @@
+gRPC Hello World Example
+=====================
+This example shows a basic implementation of a gRPC client and server and how they
+communicate with each other by exchanging a greeting message.
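+
+A minimal sketch of the client side of that exchange (not the example's exact code):
+
+```java
+import io.grpc.ManagedChannel;
+import io.grpc.ManagedChannelBuilder;
+import io.grpc.examples.helloworld.GreeterGrpc;
+import io.grpc.examples.helloworld.HelloReply;
+import io.grpc.examples.helloworld.HelloRequest;
+import java.util.concurrent.TimeUnit;
+
+public class HelloWorldClientSketch {
+  public static void main(String[] args) throws InterruptedException {
+    ManagedChannel channel =
+        ManagedChannelBuilder.forTarget("localhost:50051").usePlaintext().build();
+    try {
+      GreeterGrpc.GreeterBlockingStub stub = GreeterGrpc.newBlockingStub(channel);
+      HelloReply reply = stub.sayHello(HelloRequest.newBuilder().setName("world").build());
+      System.out.println(reply.getMessage());
+    } finally {
+      channel.shutdownNow().awaitTermination(5, TimeUnit.SECONDS);
+    }
+  }
+}
+```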
+
+Refer to the gRPC documentation for more details on the helloworld.proto definition, creating gRPC services and
+methods, and running the example: https://grpc.io/docs/languages/java/quickstart/
\ No newline at end of file
diff --git a/examples/src/main/java/io/grpc/examples/keepalive/README.md b/examples/src/main/java/io/grpc/examples/keepalive/README.md
new file mode 100644
index 00000000000..7b5b72665e7
--- /dev/null
+++ b/examples/src/main/java/io/grpc/examples/keepalive/README.md
@@ -0,0 +1,16 @@
+gRPC Keepalive Example
+=====================
+
+This example shows how to configure keepalive on a gRPC client and server and how
+keepalive affects the connection between them.
+
+HTTP/2 PING-based keepalives are a way to keep an HTTP/2 connection alive even when there is no data being transferred.
+This is done by periodically sending a PING frame to the other end of the connection.
+HTTP/2 keepalives can improve the performance and reliability of HTTP/2 connections,
+but it is important to configure the keepalive interval carefully.
+
+gRPC sends HTTP/2 pings on the transport to detect whether the connection is down.
+If a ping is not acknowledged by the other side within a certain period, the connection is closed.
+Note that pings are only necessary when there is no activity on the connection.
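+
+A minimal client-side sketch with illustrative interval values (overly aggressive settings can overload servers and may be rejected by them):
+
+```java
+import io.grpc.ManagedChannel;
+import io.grpc.ManagedChannelBuilder;
+import java.util.concurrent.TimeUnit;
+
+public class KeepAliveChannelSketch {
+  static ManagedChannel newChannel() {
+    return ManagedChannelBuilder.forTarget("localhost:50051")
+        .keepAliveTime(5, TimeUnit.MINUTES)      // send a ping after 5 minutes without reads
+        .keepAliveTimeout(20, TimeUnit.SECONDS)  // close the connection if the ping is not acked
+        .keepAliveWithoutCalls(false)            // only ping while there are active RPCs
+        .usePlaintext()
+        .build();
+  }
+}
+```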
+
+Refer to the gRPC documentation for more details on keepalive configuration: https://grpc.io/docs/guides/keepalive/
\ No newline at end of file
diff --git a/examples/src/main/java/io/grpc/examples/loadbalance/README.md b/examples/src/main/java/io/grpc/examples/loadbalance/README.md
new file mode 100644
index 00000000000..0d19d2f3335
--- /dev/null
+++ b/examples/src/main/java/io/grpc/examples/loadbalance/README.md
@@ -0,0 +1,20 @@
+gRPC Load Balance Example
+=====================
+
+One of the key features of gRPC is load balancing, which allows requests from clients to be distributed across multiple servers.
+This helps prevent any one server from becoming overloaded and allows the system to scale up by adding more servers.
+
+A gRPC load balancing policy is given a list of server IP addresses by the name resolver.
+The policy is responsible for maintaining connections (subchannels) to the servers and picking a connection to use when an RPC is sent.
+
+By default, the pick_first policy will be used.
+This policy actually does no load balancing but just tries each address it gets from the name resolver and uses the first one it can connect to.
+By updating the gRPC service config you can also switch to using round_robin that connects to every address it gets and rotates through the connected backends for each RPC.
+There are also some other load balancing policies available, but the exact set varies by language.
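+
+For example, switching a channel from the default pick_first to round_robin (a minimal sketch; it assumes the channel's target resolves to multiple addresses):
+
+```java
+import io.grpc.ManagedChannel;
+import io.grpc.ManagedChannelBuilder;
+
+public class RoundRobinChannelSketch {
+  static ManagedChannel newChannel() {
+    return ManagedChannelBuilder.forTarget("my-service.example.com:50051")
+        .defaultLoadBalancingPolicy("round_robin")  // instead of the default pick_first
+        .usePlaintext()
+        .build();
+  }
+}
+```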
+
+This example shows how to use load balancing in gRPC. If the built-in policies do not meet your requirements,
+you can implement your own policy; see the [Custom Load Balance](../customloadbalance) example.
+
+gRPC supports both proxy-based and client-side load balancing; by default, gRPC clients use client-side load balancing.
+
+Refer to the gRPC documentation for more details on load balancing: https://grpc.io/blog/grpc-load-balancing/
\ No newline at end of file
diff --git a/examples/src/main/java/io/grpc/examples/multiplex/README.md b/examples/src/main/java/io/grpc/examples/multiplex/README.md
new file mode 100644
index 00000000000..fb24642a41b
--- /dev/null
+++ b/examples/src/main/java/io/grpc/examples/multiplex/README.md
@@ -0,0 +1,20 @@
+gRPC Multiplex Example
+=====================
+
+gRPC multiplexing refers to the ability of a single gRPC connection to handle multiple independent streams of communication simultaneously.
+This is part of the HTTP/2 protocol on which gRPC is built.
+Each gRPC connection supports multiple streams that can carry different RPCs, making it highly efficient for high-throughput, low-latency communication.
+
+In gRPC, sharing resources like channels and servers can improve efficiency and resource utilization.
+
+- Sharing gRPC Channels and Servers
+
+ 1. Shared gRPC Channel:
+ - A single gRPC channel can be used by multiple stubs, enabling different service clients to communicate over the same connection.
+ - This minimizes the overhead of establishing and managing multiple connections
+
+  2. Shared gRPC Server:
+     - A single gRPC server can host multiple services on the same port, exposing them over one listening socket.
+     - This minimizes the overhead of running and managing multiple server instances.
+
+This example demonstrates how to implement a gRPC server that serves both a GreetingService and an EchoService, and a client that shares a single channel across multiple stubs for both services.
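+
+A rough sketch of that sharing (the Echo stub name below is illustrative and may differ from the classes generated for this example):
+
+```java
+import io.grpc.ManagedChannel;
+import io.grpc.ManagedChannelBuilder;
+import io.grpc.examples.helloworld.GreeterGrpc;
+
+public class SharedChannelSketch {
+  public static void main(String[] args) {
+    // One channel (one underlying HTTP/2 connection) shared by multiple stubs.
+    ManagedChannel channel =
+        ManagedChannelBuilder.forTarget("localhost:50051").usePlaintext().build();
+
+    GreeterGrpc.GreeterBlockingStub greeter = GreeterGrpc.newBlockingStub(channel);
+    // A second service's stub would share the same channel, e.g. (illustrative name):
+    // EchoGrpc.EchoBlockingStub echo = EchoGrpc.newBlockingStub(channel);
+
+    // ... issue RPCs on both stubs; they multiplex over the same connection ...
+    channel.shutdownNow();
+  }
+}
+```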
\ No newline at end of file
diff --git a/examples/src/main/java/io/grpc/examples/nameresolve/README.md b/examples/src/main/java/io/grpc/examples/nameresolve/README.md
new file mode 100644
index 00000000000..36c8d7e2a6b
--- /dev/null
+++ b/examples/src/main/java/io/grpc/examples/nameresolve/README.md
@@ -0,0 +1,22 @@
+gRPC Name Resolve Example
+=====================
+
+This example explains the standard name resolution process and how to implement it with a custom name resolver.
+
+Name resolution is fundamentally about service discovery:
+it is the process of converting a name into an address, and
+a name resolver is the component that implements that process.
+
+When sending a gRPC request, the client must resolve the service name to an IP address.
+By default, DNS name resolution is used.
+
+The Name Resolver in gRPC is necessary because clients often don’t know the exact IP address or port of the server
+they need to connect to.
+
+The client registers an implementation of a **name resolver provider** in a process-global **registry** close to the start of the process.
+The name resolver provider will be called by the **gRPC library** with a **target string** intended for the custom name resolver.
+Given that target string, the name resolver provider returns an instance of a **name resolver**,
+which interacts with the client connection to direct requests according to the target string.
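+
+A rough sketch of the registration and usage (the provider class name and scheme below are illustrative):
+
+```java
+import io.grpc.ManagedChannel;
+import io.grpc.ManagedChannelBuilder;
+import io.grpc.NameResolverRegistry;
+
+public class NameResolveClientSketch {
+  public static void main(String[] args) {
+    // Register the custom provider early in the process; ExampleNameResolverProvider
+    // is an illustrative name for a NameResolverProvider implementation.
+    NameResolverRegistry.getDefaultRegistry().register(new ExampleNameResolverProvider());
+
+    // The "example" scheme in the target string selects the registered provider,
+    // which returns a NameResolver for the rest of the target.
+    ManagedChannel channel = ManagedChannelBuilder
+        .forTarget("example:///lb.example.grpc.io")
+        .usePlaintext()
+        .build();
+    // ... create stubs on the channel as usual ...
+  }
+}
+```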
+
+Refer to the gRPC documentation for more on name resolution and custom name resolvers:
+https://grpc.io/docs/guides/custom-name-resolution/
\ No newline at end of file
diff --git a/examples/src/main/java/io/grpc/examples/preserialized/README.md b/examples/src/main/java/io/grpc/examples/preserialized/README.md
new file mode 100644
index 00000000000..d49b3507d03
--- /dev/null
+++ b/examples/src/main/java/io/grpc/examples/preserialized/README.md
@@ -0,0 +1,18 @@
+gRPC Pre-Serialized Messages Example
+=====================
+
+This example shows how a gRPC client and server can exchange pre-serialized request and response
+messages by using a ByteArrayMarshaller, which produces a byte[] instead of decoding into the
+usual generated message objects.
+
+This is a performance optimization that can be useful if you read the request/response from disk or a database
+where it is already serialized, or if you need to send the same complicated message to many clients and servers.
+The same approach can also avoid deserializing requests/responses that are to be stored in a database.
+
+It shows how to modify the MethodDescriptor to use byte[] as the response type instead of HelloReply. By
+adjusting toBuilder() you can choose which of the request and response are bytes.
+The generated bindService() uses ServerCalls to create RPC handlers; since the generated
+bindService() won't accept byte[] in the AsyncService, this example uses ServerCalls directly.
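+
+A rough sketch of rebuilding the method descriptor so the response side is raw bytes (assuming the generated hello world classes; a pass-through byte[] marshaller is shown inline and may differ from this example's ByteArrayMarshaller):
+
+```java
+import com.google.common.io.ByteStreams;
+import io.grpc.MethodDescriptor;
+import io.grpc.examples.helloworld.GreeterGrpc;
+import io.grpc.examples.helloworld.HelloRequest;
+import java.io.ByteArrayInputStream;
+import java.io.IOException;
+import java.io.InputStream;
+
+public class PreSerializedSketch {
+  // A marshaller that passes bytes through untouched.
+  static final class ByteArrayMarshaller implements MethodDescriptor.Marshaller<byte[]> {
+    @Override public InputStream stream(byte[] value) {
+      return new ByteArrayInputStream(value);
+    }
+    @Override public byte[] parse(InputStream stream) {
+      try {
+        return ByteStreams.toByteArray(stream);
+      } catch (IOException e) {
+        throw new RuntimeException(e);
+      }
+    }
+  }
+
+  // SayHello with the normal HelloRequest request but a pre-serialized byte[] response.
+  static final MethodDescriptor<HelloRequest, byte[]> SAY_HELLO_BYTES_RESPONSE =
+      GreeterGrpc.getSayHelloMethod()
+          .toBuilder(
+              GreeterGrpc.getSayHelloMethod().getRequestMarshaller(),
+              new ByteArrayMarshaller())
+          .build();
+}
+```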
+
+Stubs use ClientCalls to send RPCs; since the generated stub won't have byte[] in its
+method signatures, this example uses ClientCalls directly.
\ No newline at end of file
diff --git a/examples/src/main/java/io/grpc/examples/retrying/README.md b/examples/src/main/java/io/grpc/examples/retrying/README.md
new file mode 100644
index 00000000000..bb29ce75e43
--- /dev/null
+++ b/examples/src/main/java/io/grpc/examples/retrying/README.md
@@ -0,0 +1,27 @@
+gRPC Retrying Example
+=====================
+
+The Retrying example provides a HelloWorld gRPC client &
+server which demos the effect of client retry policy configured on the [ManagedChannel](
+https://github.com/grpc/grpc-java/blob/master/api/src/main/java/io/grpc/ManagedChannel.java) via [gRPC ServiceConfig](
+https://github.com/grpc/grpc/blob/master/doc/service_config.md). Retry policy implementation &
+configuration details are outlined in the [proposal](https://github.com/grpc/proposal/blob/master/A6-client-retries.md).
+
+This retrying example is very similar to the [hedging example](https://github.com/grpc/grpc-java/tree/master/examples/src/main/java/io/grpc/examples/hedging) in its setup.
+The [RetryingHelloWorldServer](src/main/java/io/grpc/examples/retrying/RetryingHelloWorldServer.java) responds with
+a status UNAVAILABLE error response to a specified percentage of requests to simulate server resource exhaustion and
+general flakiness. The [RetryingHelloWorldClient](src/main/java/io/grpc/examples/retrying/RetryingHelloWorldClient.java) makes
+a number of sequential requests to the server, several of which will be retried depending on the configured policy in
+[retrying_service_config.json](https://github.com/grpc/grpc-java/blob/master/examples/src/main/resources/io/grpc/examples/retrying/retrying_service_config.json). Although
+the requests are blocking unary calls for simplicity, these could easily be changed to future unary calls in order to
+test the result of request concurrency with retry policy enabled.
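+
+For reference, a retry policy in a service config has the following shape (illustrative values; see retrying_service_config.json for the values this example actually uses):
+
+```json
+"methodConfig": [
+  {
+    "name": [ { "service": "helloworld.Greeter", "method": "SayHello" } ],
+    "retryPolicy": {
+      "maxAttempts": 5,
+      "initialBackoff": "0.5s",
+      "maxBackoff": "30s",
+      "backoffMultiplier": 2,
+      "retryableStatusCodes": [ "UNAVAILABLE" ]
+    }
+  }
+]
+```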
+
+One can experiment with the [RetryingHelloWorldServer](src/main/java/io/grpc/examples/retrying/RetryingHelloWorldServer.java)
+failure conditions to simulate server throttling, as well as alter policy values in the [retrying_service_config.json](
+https://github.com/grpc/grpc-java/blob/master/examples/src/main/resources/io/grpc/examples/retrying/retrying_service_config.json) to see their effects. To disable retrying
+entirely, set environment variable `DISABLE_RETRYING_IN_RETRYING_EXAMPLE=true` before running the client.
+Disabling the retry policy should produce many more failed gRPC calls as seen in the output log.
+
+See [the examples README](https://github.com/grpc/grpc-java/tree/master/examples#to-build-the-examples) for how to build and run the example. The
+executables for the server and the client are `retrying-hello-world-server` and
+`retrying-hello-world-client`.
diff --git a/examples/src/main/java/io/grpc/examples/routeguide/README.md b/examples/src/main/java/io/grpc/examples/routeguide/README.md
new file mode 100644
index 00000000000..2528b26410c
--- /dev/null
+++ b/examples/src/main/java/io/grpc/examples/routeguide/README.md
@@ -0,0 +1,24 @@
+gRPC Route Guide Example
+=====================
+
+This example illustrates how to implement and use a gRPC server and client for a RouteGuide service,
+which demonstrates all 4 types of gRPC methods (unary, client streaming, server streaming, and bidirectional streaming).
+Additionally, the service loads geographic features from a JSON file [route_guide_db.json](https://github.com/grpc/grpc-java/blob/master/examples/src/main/resources/io/grpc/examples/routeguide/route_guide_db.json) and retrieves features based on latitude and longitude.
+
+The route_guide.proto file defines a gRPC service with 4 types of RPC methods, showcasing different communication patterns between client and server.
+1. Unary RPC
+ - rpc GetFeature(Point) returns (Feature) {}
+2. Server-Side Streaming RPC
+ - rpc ListFeatures(Rectangle) returns (stream Feature) {}
+3. Client-Side Streaming RPC
+ - rpc RecordRoute(stream Point) returns (RouteSummary) {}
+4. Bidirectional Streaming RPC
+ - rpc RouteChat(stream RouteNote) returns (stream RouteNote) {}
+
+These RPC methods illustrate the versatility of gRPC in handling various communication patterns,
+from simple request-response interactions to complex bidirectional streaming scenarios.
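+
+As a small illustration, a bidirectional streaming handler returns a request observer and writes responses as they arrive (simplified; not the example's actual RouteChat logic):
+
+```java
+import io.grpc.examples.routeguide.RouteGuideGrpc;
+import io.grpc.examples.routeguide.RouteNote;
+import io.grpc.stub.StreamObserver;
+
+public class RouteChatSketch extends RouteGuideGrpc.RouteGuideImplBase {
+  @Override
+  public StreamObserver<RouteNote> routeChat(StreamObserver<RouteNote> responseObserver) {
+    return new StreamObserver<RouteNote>() {
+      @Override public void onNext(RouteNote note) {
+        responseObserver.onNext(note);  // echo each note back to the caller
+      }
+      @Override public void onError(Throwable t) {
+        // client-side error or cancellation; nothing more to send
+      }
+      @Override public void onCompleted() {
+        responseObserver.onCompleted();
+      }
+    };
+  }
+}
+```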
+
+For more details, refer to the full route_guide.proto file on GitHub: https://github.com/grpc/grpc-java/blob/master/examples/src/main/proto/route_guide.proto
+
+Refer to the gRPC documentation for a full walkthrough of building and running the route guide example:
+https://grpc.io/docs/languages/java/basics/
\ No newline at end of file
diff --git a/examples/src/main/java/io/grpc/examples/waitforready/README.md b/examples/src/main/java/io/grpc/examples/waitforready/README.md
new file mode 100644
index 00000000000..1e294b453b6
--- /dev/null
+++ b/examples/src/main/java/io/grpc/examples/waitforready/README.md
@@ -0,0 +1,29 @@
+gRPC Wait-For-Ready Example
+=====================
+
+This example gives the usage and implementation of the Wait-For-Ready feature.
+
+This feature can be activated on a client stub, ensuring that Remote Procedure Calls (RPCs) are held until the server is ready to receive them.
+By waiting for the server to become available before sending requests, this mechanism enhances reliability,
+particularly in situations where server availability may be delayed or unpredictable.
+
+When an RPC is initiated and the channel fails to connect to the server, its behavior depends on the Wait-for-Ready option:
+
+- Without Wait-for-Ready (Default Behavior):
+
+ - The RPC will immediately fail if the channel cannot establish a connection, providing prompt feedback about the connectivity issue.
+
+- With Wait-for-Ready:
+
+ - The RPC will not fail immediately. Instead, it will be queued and will wait until the connection is successfully established.
+ This approach is beneficial for handling temporary network disruptions more gracefully, ensuring the RPC is eventually executed once the connection is ready.
+
+
+This example provides a simple client that requests a greeting from the hello world server with wait-for-ready set on the stub, as sketched below.
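+
+A minimal sketch (assuming the generated hello world classes; not the example's exact code):
+
+```java
+import io.grpc.ManagedChannel;
+import io.grpc.ManagedChannelBuilder;
+import io.grpc.examples.helloworld.GreeterGrpc;
+import io.grpc.examples.helloworld.HelloRequest;
+
+public class WaitForReadyClientSketch {
+  public static void main(String[] args) {
+    ManagedChannel channel =
+        ManagedChannelBuilder.forTarget("localhost:50051").usePlaintext().build();
+
+    // With wait-for-ready, this call queues until a connection is established
+    // instead of failing fast while the channel cannot connect.
+    GreeterGrpc.GreeterBlockingStub stub =
+        GreeterGrpc.newBlockingStub(channel).withWaitForReady();
+    System.out.println(
+        stub.sayHello(HelloRequest.newBuilder().setName("world").build()).getMessage());
+
+    channel.shutdownNow();
+  }
+}
+```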
+
+To test this flow, follow these steps:
+- run this client without a server running(client rpc should hang)
+- start the server (client rpc should complete)
+- run this client again (client rpc should complete nearly immediately)
+
+Refer to the gRPC documentation for more on wait-for-ready: https://grpc.io/docs/guides/wait-for-ready/
\ No newline at end of file