Pass MuleSoft MCIA-Level-1 Exam With Practice Test Questions Dumps Bundle [Q116-Q139]

2023 Valid MCIA-Level-1 test answers & MuleSoft Exam PDF

Q116. A Mule application contains a Batch Job with two Batch Steps (Batch_Step_1 and Batch_Step_2). A payload with 1000 records is received by the Batch Job. How many threads are used by the Batch Job to process records, and how does each Batch Step process records within the Batch Job?

A. Each Batch Job uses SEVERAL THREADS for the Batch Steps. Each Batch Step instance receives ONE record at a time as the payload, and RECORDS are processed IN PARALLEL within and between the two Batch Steps
B. Each Batch Job uses a SINGLE THREAD for all Batch Steps. Each Batch Step instance receives ONE record at a time as the payload, and RECORDS are processed IN ORDER, first through Batch_Step_1 and then through Batch_Step_2
C. Each Batch Job uses a SINGLE THREAD to process a configured block size of records. Each Batch Step instance receives A BLOCK OF records as the payload, and BLOCKS of records are processed IN ORDER
D. Each Batch Job uses SEVERAL THREADS for the Batch Steps. Each Batch Step instance receives ONE record at a time as the payload, and BATCH STEP INSTANCES execute IN PARALLEL to process records and Batch Steps in ANY order as fast as possible

* Each Batch Job uses SEVERAL THREADS for the Batch Steps.
* Each Batch Step instance receives ONE record at a time as the payload. It is not received in a block, as a step does not wait for multiple records to complete before moving a record to the next Batch Step. (So option C is out.)
* RECORDS are processed IN PARALLEL within and between the two Batch Steps.
* RECORDS are not processed in order. For example, if the second record completes Batch_Step_1 before the first record does, it moves on to Batch_Step_2 before the first record. (So options B and C are out.)
* Option D is also out, because each individual record still traverses the Batch Steps in order: Batch_Step_1, then Batch_Step_2.
* A batch job is the scope element in an application in which Mule processes a message payload as a batch of records. The term batch job is inclusive of all three phases of processing: Load and Dispatch, Process, and On Complete.
* A batch job instance is an occurrence in a Mule application whenever a Mule flow executes a batch job. Mule creates the batch job instance in the Load and Dispatch phase. Every batch job instance is identified internally using a unique String known as the batch job instance id.
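To make the explanation above concrete, here is a minimal sketch of a Batch Job with two Batch Steps in Mule 4 XML. All names and the block size are illustrative, not taken from the question; by default Mule queues records in blocks of 100 and dispatches them across several threads, but each Batch Step instance still sees one record at a time as the payload.

<flow name="process-records-flow">
  <batch:job jobName="recordBatchJob" blockSize="100">
    <batch:process-records>
      <batch:step name="Batch_Step_1">
        <!-- Each step instance receives ONE record at a time as the payload;
             the batch engine dispatches blocks of records across several threads -->
        <logger level="DEBUG" message="#['Step 1: ' ++ write(payload)]"/>
      </batch:step>
      <batch:step name="Batch_Step_2">
        <!-- A record enters this step as soon as it finishes Batch_Step_1,
             regardless of how far the other records have progressed -->
        <logger level="DEBUG" message="#['Step 2: ' ++ write(payload)]"/>
      </batch:step>
    </batch:process-records>
    <batch:on-complete>
      <logger level="INFO" message="#['Records processed: ' ++ (payload.totalRecords as String)]"/>
    </batch:on-complete>
  </batch:job>
</flow>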
Q117. In Anypoint Platform, a company wants to configure multiple identity providers (IdPs) for various lines of business (LOBs). Multiple business groups and environments have been defined for these LOBs. What Anypoint Platform feature can use multiple IdPs to provide access to the company's business groups and environments?

A. User management
B. Roles and permissions
C. Dedicated load balancers
D. Client Management

Explanation: https://www.folkstalk.com/2019/11/mulesoft-integration-and-platform.html

Q118. Mule application A receives a request Anypoint MQ message REQU with a payload containing a variable-length list of request objects. Application A uses the For Each scope to split the list into individual objects and sends each object as a message to an Anypoint MQ queue. Service S listens on that queue, processes each message independently of all other messages, and sends a response message to a response queue. Application A listens on that response queue and must in turn create and publish a response Anypoint MQ message RESP with a payload containing the list of responses sent by service S in the same order as the request objects originally sent in REQU. Assume successful response messages are returned by service S for all request messages. What is required so that application A can ensure that the length and order of the list of objects in RESP and REQU match, while at the same time maximizing message throughput?

A. Use a Scatter-Gather within the For Each scope to ensure response message order. Configure the Scatter-Gather with a persistent object store
B. Perform all communication involving service S synchronously from within the For Each scope, so objects in RESP are in the exact same order as request objects in REQU
C. Use an Async scope within the For Each scope and collect response messages in a second For Each scope in the order in which they arrive, then send RESP using this list of responses
D. Keep track of the list length and all object indices in REQU, both in the For Each scope and in all communication involving service S. Use persistent storage when creating RESP

Correct answer: Perform all communication involving service S synchronously from within the For Each scope, so objects in RESP are in the exact same order as request objects in REQU (option B).
Explanation: Using Anypoint MQ, you can create two types of queues:
* Standard queue: These queues don't guarantee a specific message order. Standard queues are the best fit for applications in which messages must be delivered quickly.
* FIFO (first in, first out) queue: These queues ensure that your messages arrive in order. FIFO queues are the best fit for applications requiring strict message ordering and exactly-once delivery, but in which message delivery speed is of less importance.
A FIFO queue is not offered in any of the options, and it would also decrease throughput. Similarly, persistent storage is not the preferred approach when maximizing message throughput, which rules out options A and D. Scatter-Gather also does not support an Object Store, which independently rules out option A. Standard Anypoint MQ queues don't guarantee a specific message order, so collecting responses with a second For Each scope in arrival order (option C) won't work, as the requirement is to preserve order. Considering all the above factors, the feasible approach is to perform all communication involving service S synchronously from within the For Each scope, so objects in RESP are in the exact same order as request objects in REQU.
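As a rough illustration of the correct option, the sketch below performs the request/response exchange for each record synchronously inside the For Each scope, so the response list is built in the same order as REQU. Queue names, config names, and the payload path are hypothetical, and a production implementation would still correlate each response to its request (for example via a message ID property):

<foreach collection="#[payload.requests]">
  <!-- Publish one request object to the queue serviced by S -->
  <anypoint-mq:publish config-ref="Anypoint_MQ_Config" destination="service-s-requests"/>
  <!-- Wait for the corresponding response before the next iteration starts -->
  <anypoint-mq:consume config-ref="Anypoint_MQ_Config" destination="service-s-responses"/>
  <!-- Append the response, preserving request order -->
  <set-variable variableName="responses" value="#[(vars.responses default []) ++ [payload]]"/>
</foreach>
<set-payload value="#[vars.responses]"/>
<!-- RESP now has the same length and order as REQU -->
<anypoint-mq:publish config-ref="Anypoint_MQ_Config" destination="resp-queue"/>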
Q119. Refer to the exhibit. A Mule application is deployed to a multi-node Mule runtime cluster. The Mule application uses the competing consumer pattern among its cluster replicas to receive JMS messages from a JMS queue. To process each received JMS message, the following steps are performed in a flow:
Step 1: The JMS Correlation ID header is read from the received JMS message.
Step 2: The Mule application invokes an idempotent SOAP web service over HTTPS, passing the JMS Correlation ID as one parameter in the SOAP request.
Step 3: The response from the SOAP web service also returns the same JMS Correlation ID.
Step 4: The JMS Correlation ID received from the SOAP web service is validated to be identical to the JMS Correlation ID received in Step 1.
Step 5: The Mule application creates a response JMS message, setting the JMS Correlation ID message header to the validated JMS Correlation ID, and publishes that message to a response JMS queue.
Where should the Mule application store the JMS Correlation ID values received in Step 1 and Step 3 so that the validation in Step 4 can be performed, while also making the overall Mule application highly available, fault-tolerant, performant, and maintainable?

A. Both Correlation ID values should be stored in a persistent object store
B. Both Correlation ID values should be stored in a non-persistent object store
C. The Correlation ID value in Step 1 should be stored in a persistent object store; the Correlation ID value in Step 3 should be stored as a Mule event variable/attribute
D. Both Correlation ID values should be stored as Mule event variables/attributes
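A minimal sketch of Steps 1, 4, and 5 using Mule event variables, which travel with the event itself and therefore need no shared state across cluster nodes. The attribute paths, names, and the jms:message builder usage are assumptions for illustration, not taken from the question:

<!-- Step 1: capture the inbound JMS Correlation ID in an event variable -->
<set-variable variableName="requestCorrelationId" value="#[attributes.headers.correlationId]"/>
<!-- Steps 2-3: invoke the idempotent SOAP web service over HTTPS, then extract
     the returned correlation ID (the response path here is hypothetical) -->
<set-variable variableName="responseCorrelationId" value="#[payload.correlationId]"/>
<!-- Step 4: validate that the two IDs are identical -->
<choice>
  <when expression="#[vars.requestCorrelationId == vars.responseCorrelationId]">
    <!-- Step 5: publish the response message with the validated Correlation ID -->
    <jms:publish config-ref="JMS_Config" destination="response.queue">
      <jms:message correlationId="#[vars.requestCorrelationId]"/>
    </jms:publish>
  </when>
  <otherwise>
    <raise-error type="APP:CORRELATION_MISMATCH" description="Correlation IDs do not match"/>
  </otherwise>
</choice>

Because the variables live only for the duration of one Mule event, this approach stays correct under the competing consumer pattern with no object store reads or writes on the critical path.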
Q120. An external web UI application currently accepts occasional HTTP requests from client web browsers to change (insert, update, or delete) inventory pricing information in an inventory system's database. Each inventory pricing change must be transformed and then synchronized with multiple customer experience systems in near real-time (in under 10 seconds). New customer experience systems are expected to be added in the future. The database is used heavily and limits the number of SELECT queries that can be made to the database to 10 requests per hour per user. What is the most scalable, idiomatic (used for its intended purpose), decoupled, reusable, and maintainable integration mechanism available to synchronize each inventory pricing change with the various customer experience systems in near real-time?

A. Write a Mule application with a Database On Table Row event source configured for the inventory pricing database, with the watermark attribute set to an appropriate database column. In the same flow, use a Scatter-Gather to call each customer experience system's REST API with transformed inventory-pricing records
B. Add a trigger to the inventory-pricing database table so that for each change to the inventory pricing database, a stored procedure is called that makes a REST call to a Mule application. Write the Mule application to publish each Mule event as a message to an Anypoint MQ exchange. Write other Mule applications to subscribe to the Anypoint MQ exchange, transform each received message, and then update that Mule application's corresponding customer experience system(s)
C. Replace the external web UI application with a Mule application to accept HTTP requests from client web browsers. In the same Mule application, use a Batch Job scope to test if the database request will succeed, aggregate pricing changes within a short time window, and then update both the inventory pricing database and each customer experience system using a Parallel For Each scope
D. Write a Mule application with a Database On Table Row event source configured for the inventory pricing database, with the ID attribute set to an appropriate database column. In the same flow, use a Batch Job scope to publish transformed inventory-pricing records to an Anypoint MQ queue. Write other Mule applications to subscribe to the Anypoint MQ queue, transform each received message, and then update that Mule application's corresponding customer experience system(s)

Q121. An API client makes an HTTP request to an API gateway with an Accept header containing the value "application/json". What is a valid HTTP response payload for this request in the client-requested data format?

A. <status>healthy</status>
B. {"status": "healthy"}
C. status("healthy")
D. status: healthy

Q122. Refer to the exhibit. An organization is designing a Mule application to receive data from one external business partner. The two companies currently have no shared IT infrastructure and do not want to establish one. Instead, all communication should be over the public internet (with no VPN). What Anypoint Connector can be used in the organization's Mule application to securely receive data from this external business partner?

A. File connector
B. VM connector
C. SFTP connector
D. Object Store connector

* Object Store and the VM connector are used for sharing data between or within Mule applications in the same setup; they can't be used with an external business partner.
* The File connector will also not be useful, as the two companies currently have no shared IT infrastructure; it is specific to local use.
* The correct answer is the SFTP connector. The SFTP connector implements a secure file transport channel so that your Mule application can exchange files with external resources. SFTP uses the SSH security protocol to transfer messages. You can implement the SFTP endpoint as an inbound endpoint with a one-way exchange pattern, or as an outbound endpoint configured for either a one-way or request-response exchange pattern.
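For reference, a minimal sketch of an SFTP configuration of the kind the organization could use to receive the partner's files over the public internet; host, credentials, directory, and polling frequency are placeholders:

<sftp:config name="Partner_SFTP">
  <sftp:connection host="partner.example.com" port="22"
                   username="acme" password="${sftp.password}"/>
</sftp:config>

<flow name="receive-partner-data">
  <!-- Poll the partner-facing directory for new files; SFTP runs over SSH,
       so the transfer is encrypted without any VPN or shared infrastructure -->
  <sftp:listener config-ref="Partner_SFTP" directory="/inbound" autoDelete="true">
    <scheduling-strategy>
      <fixed-frequency frequency="60000"/>
    </scheduling-strategy>
  </sftp:listener>
  <logger level="INFO" message="#['Received file: ' ++ attributes.fileName]"/>
</flow>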
Q123. A marketing organization is designing a Mule application to process campaign data. The Mule application will periodically check for a file in an SFTP location and process the records in the file. The size of the file can vary from 10MB to 5GB. Due to the limited availability of vCores, the Mule application is deployed to a single CloudHub worker configured with vCore size 0.2. The application must transform and send different formats of this file to three different downstream SFTP locations. What is the most idiomatic (used for its intended purpose) and performant way to configure the SFTP operations or event sources to process the large files to support these deployment requirements?

A. Use an in-memory repeatable stream
B. Use a file-stored non-repeatable stream
C. Use an in-memory non-repeatable stream
D. Use a file-stored repeatable stream

Q124. An Order microservice and a Fulfillment microservice are being designed to communicate with their clients through message-based integration (and NOT through API invocations). The Order microservice publishes an Order message (a kind of command message) containing the details of an order to be fulfilled. The intention is that Order messages are only consumed by one Mule application, the Fulfillment microservice. The Fulfillment microservice consumes Order messages, fulfills the order described therein, and then publishes an OrderFulfilled message (a kind of event message). Each OrderFulfilled message can be consumed by any interested Mule application, and the Order microservice is one such Mule application. What is the most appropriate choice of message broker(s) and message destination(s) in this scenario?

A. Order messages are sent to an Anypoint MQ exchange. OrderFulfilled messages are sent to an Anypoint MQ queue. Both microservices interact with Anypoint MQ as the message broker, which must therefore scale to support the load of both microservices
B. Order messages are sent to a JMS queue. OrderFulfilled messages are sent to a JMS topic. Both microservices interact with the same JMS provider (message broker) instance, which must therefore scale to support the load of both microservices
C. Order messages are sent directly to the Fulfillment microservice. OrderFulfilled messages are sent directly to the Order microservice. The Order microservice interacts with one AMQP-compatible message broker and the Fulfillment microservice interacts with a different AMQP-compatible message broker, so that both message brokers can be chosen and scaled to best support the load of each microservice
D. Order messages are sent to a JMS queue. OrderFulfilled messages are sent to a JMS topic. The Order microservice interacts with one JMS provider (message broker) and the Fulfillment microservice interacts with a different JMS provider, so that both message brokers can be chosen and scaled to best support the load of each microservice

Q125. As a part of a project requirement, a client will send a stream of data to a Mule application. Payload size can vary between 10MB and 5GB. The Mule application is required to transform the data and send it across multiple SFTP servers. Due to cost cutting in the organization, the Mule application can only be allocated one worker with a size of 0.2 vCore. As an integration architect, which streaming strategy would you suggest to handle this scenario?

A. In-memory non-repeatable stream
B. File-based non-repeatable stream
C. In-memory repeatable stream
D. File-based repeatable stream

As the question says that data needs to be sent across multiple SFTP servers, we cannot use non-repeatable streams: the non-repeatable strategy disables repeatable streams, which means an input stream can be read only once. You can't use in-memory storage because with 0.2 vCore you will get only 1 GB of heap memory, so the application will error out for files larger than 1 GB. Hence the correct option is the file-based repeatable stream.
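A hedged sketch of the file-stored repeatable stream strategy on an SFTP read: only a small configurable buffer is held in memory and the rest of the stream spills to disk, so a 5GB file does not exhaust the limited heap of a 0.2 vCore worker, and because the stream is repeatable it can be re-read for each of the three downstream SFTP writes. The path, config name, and buffer size are illustrative, and the repeatable-file-store-stream element requires the ee namespace:

<sftp:read config-ref="Campaign_SFTP" path="/inbound/campaign-data.csv">
  <!-- Keep at most 512 KB in memory; overflow is buffered on disk.
       The stream can be consumed repeatedly, once per downstream write -->
  <ee:repeatable-file-store-stream inMemorySize="512" bufferUnit="KB"/>
</sftp:read>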
Q126. What comparison is true about a CloudHub Dedicated Load Balancer (DLB) vs. the CloudHub Shared Load Balancer (SLB)?

A. Only a DLB allows the configuration of a custom TLS server certificate
B. Only the SLB can forward HTTP traffic to the VPC-internal ports of the CloudHub workers
C. Both a DLB and the SLB allow the configuration of access control via IP whitelists
D. Both a DLB and the SLB implement load balancing by sending HTTP requests to workers with the lowest workloads

* Shared load balancers don't allow you to configure custom SSL certificates or proxy rules.
* Dedicated Load Balancers are optional; you need to purchase them additionally if needed.
* TLS is a cryptographic protocol that provides communications security for your Mule app. TLS offers many different ways of exchanging keys for authentication, encrypting data, and guaranteeing message integrity.
* The CloudHub Shared Load Balancer terminates TLS connections and uses its own server-side certificate.
* Only a DLB allows the configuration of a custom TLS server certificate.
* A DLB enables you to define SSL configurations to provide custom certificates and optionally enforce two-way SSL client authentication.
* To use a DLB in your environment, you must first create an Anypoint VPC. Because you can associate multiple environments with the same Anypoint VPC, you can use the same dedicated load balancer for your different environments.
* MuleSoft Reference: https://docs.mulesoft.com/runtime-manager/dedicated-load-balancer-tutorial

Q127. An organization is designing a Mule application to support an all-or-nothing transaction between several database operations and some other connectors, so that they all roll back if there is a problem with any of the connectors. Besides the Database connector, what other connector can be used in the transaction?

A. VM
B. Anypoint MQ
C. SFTP
D. ObjectStore
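To illustrate the VM option, here is a hedged sketch of an all-or-nothing transaction spanning a Database operation and a VM publish. Joining two different transactional resources like this generally requires an XA transaction (and therefore an XA transaction manager available in the Mule runtime); the table, queue, and config names are placeholders:

<try transactionalAction="ALWAYS_BEGIN" transactionType="XA">
  <db:insert config-ref="Database_Config" transactionalAction="ALWAYS_JOIN">
    <db:sql>INSERT INTO orders (id, status) VALUES (:id, 'NEW')</db:sql>
    <db:input-parameters>#[{ id: payload.id }]</db:input-parameters>
  </db:insert>
  <!-- The VM publish joins the same transaction: if anything in the scope
       fails, both the database insert and this publish are rolled back -->
  <vm:publish config-ref="VM_Config" queueName="orderEvents" transactionalAction="ALWAYS_JOIN"/>
</try>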
Q128. A retailer is designing a data exchange interface to be used by its suppliers. The interface must support secure communication over the public internet. The interface must also work with a wide variety of programming languages and IT systems used by suppliers. What are suitable interface technologies for this data exchange that are secure, cross-platform, and internet friendly, assuming that Anypoint Connectors exist for these interface technologies?

A. EDIFACT XML over SFTP, JSON/REST over HTTPS
B. SOAP over HTTPS, IIOP over TLS, gRPC over HTTPS
C. XML over ActiveMQ, XML over SFTP, XML/REST over HTTPS
D. CSV over FTP, YAML over TLS, JSON over HTTPS

Q129. An organization is migrating all its Mule applications to Runtime Fabric (RTF). None of the Mule applications use Mule domain projects. Currently, all the Mule applications have been manually deployed to a server group among several customer-hosted Mule runtimes. Port conflicts between these Mule application deployments are currently managed by the DevOps team, who carefully manage Mule application properties files. When the Mule applications are migrated from the current customer-hosted server group to Runtime Fabric (RTF), do the Mule applications need to be rewritten, and what DevOps port configuration responsibilities change or stay the same?

A. NO, the Mule applications do NOT need to be rewritten. DevOps MUST STILL manage port conflicts
B. NO, the Mule applications do NOT need to be rewritten. DevOps NO LONGER needs to manage port conflicts between the Mule applications
C. YES, the Mule applications MUST be rewritten. DevOps NO LONGER needs to manage port conflicts between the Mule applications
D. YES, the Mule applications MUST be rewritten. DevOps MUST STILL manage port conflicts

Q130. Refer to the exhibit. An organization uses a 2-node Mule runtime cluster to host one stateless API implementation. The API is accessed over HTTPS through a load balancer that uses round-robin for load distribution. Two additional nodes have been added to the cluster and the load balancer has been configured to recognize the new nodes with no other change to the load balancer. What average performance change is guaranteed to happen, assuming all cluster nodes are fully operational?

A. 50% reduction in the response time of the API
B. 100% increase in the throughput of the API
C. 50% reduction in the JVM heap memory consumed by each node
D. 50% reduction in the number of requests being received by each node

Q131. An ABC Farms project team is planning to build a new API that is required to work with data from different domains across the organization. The organization has a policy that all project teams should leverage existing investments by reusing existing APIs and related resources and documentation that other project teams have already developed and deployed. To support reuse, where on Anypoint Platform should the project team go to discover and read existing APIs, discover related resources and documentation, and interact with mocked versions of those APIs?

A. Design Center
B. API Manager
C. Runtime Manager
D. Anypoint Exchange

The mocking service is a feature of Anypoint Platform and runs continuously. You can run the mocking service from the text editor, the visual editor, and from Anypoint Exchange. You can simulate calls to the API in API Designer before publishing the API specification to Exchange, or in Exchange after publishing the API specification.

Q132. A new upstream API is being designed to offer an SLA of 500 ms median and 800 ms maximum (99th percentile) response time. The corresponding API implementation needs to sequentially invoke 3 downstream APIs of very similar complexity. The first of these downstream APIs offers the following SLA for its response time: median: 100 ms, 80th percentile: 500 ms, 95th percentile: 1000 ms. If possible, how can a timeout be set in the upstream API for the invocation of the first downstream API to meet the new upstream API's desired SLA?

A. Set a timeout of 100 ms; that leaves 400 ms for the other two downstream APIs to complete
B. Do not set a timeout; the invocation of this API is mandatory and so we must wait until it responds
C. Set a timeout of 50 ms; this times out more invocations of that API but gives additional room for retries
D. No timeout is possible to meet the upstream API's desired SLA; a different SLA must be negotiated with the first downstream API or invoke an alternative API

Before we answer this question, we need to understand what the median (50th percentile) and the 80th percentile mean. If the 50th percentile (median) of a response time is 500 ms, that means 50% of the transactions are as fast as or faster than 500 ms. If the 90th percentile of the same transaction is at 1000 ms, it means that 90% are as fast or faster and only 10% are slower. Now, as per the upstream SLA, the 99th percentile is 800 ms, which means 99% of the incoming requests should have a response time less than or equal to 800 ms. But as per the first backend API, its 95th percentile is 1000 ms, which means that the backend API will take 1000 ms or less for only 95% of requests. As there are three API invocations from the upstream API, we cannot derive a timeout that can be set to meet the desired SLA, as the backend SLAs do not support it.
Let's see why the other answers are not correct:
1) Do not set a timeout: this can potentially violate the SLAs of the upstream API.
2) Set a timeout of 100 ms: this will not work, as the backend API has a median of 100 ms, meaning only 50% of requests will be answered in this time and we will get timeouts for 50% of the requests. The important thing to note here is that all APIs need to be executed sequentially, so if you get a timeout in the first API, there is no use in going to the second and third APIs. As a service provider you wouldn't want to keep 50% of your consumers dissatisfied, so this is not the best option.
* To quote an example: let's assume you have built an API to update customer contact details.
– The first API fetches the customer number based on login credentials
– The second API fetches info from one table and returns a unique key
– The third API, using the unique key provided by the second API as a primary key, updates the remaining details
* Now consider: if the first API times out and can't fetch the customer number, it is useless to call APIs 2 and 3, which is why the question specifically mentions that all APIs need to be executed sequentially.
3) Set a timeout of 50 ms: again not possible, for the same reason as above.
Hence the correct answer is: No timeout is possible to meet the upstream API's desired SLA; a different SLA must be negotiated with the first downstream API or invoke an alternative API.
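For completeness, this is how a timeout on the first downstream invocation would be expressed if the SLAs had allowed one. The sketch shows the mechanism only (the config name, path, and 800 ms value are illustrative), since the question concludes that no timeout value can satisfy the upstream SLA:

<try>
  <!-- responseTimeout is in milliseconds; exceeding it raises HTTP:TIMEOUT -->
  <http:request method="GET" config-ref="Downstream_API_1_Config"
                path="/first-downstream" responseTimeout="800"/>
  <error-handler>
    <on-error-propagate type="HTTP:TIMEOUT">
      <logger level="WARN" message="First downstream API exceeded the timeout"/>
    </on-error-propagate>
  </error-handler>
</try>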
Q133. What comparison is true about a CloudHub Dedicated Load Balancer (DLB) vs. the CloudHub Shared Load Balancer (SLB)?

A. Both a DLB and the SLB implement load balancing by sending HTTP requests to workers with the lowest workloads
B. Both a DLB and the SLB allow the configuration of access control via IP whitelists
C. Only a DLB allows the configuration of a custom TLS server certificate
D. Only the SLB can forward HTTP traffic to the VPC-internal ports of the CloudHub workers

Q134. An organization's security policies mandate complete control of the login credentials used to log in to Anypoint Platform. What feature of Anypoint Platform should be used to meet this requirement?

A. Federated Client Management
B. Federated Identity Management
C. Enterprise Security Module
D. Client ID Secret

Explanation/Reference: https://docs.mulesoft.com/access-management/external-identity

Q135. What operation can be performed through a JMX agent enabled in a Mule application?

A. View object store entries
B. Replay an unsuccessful message
C. Set a particular log4j2 log level to TRACE
D. Deploy a Mule application
Q136. Refer to the exhibit. An organization uses a 2-node Mule runtime cluster to host one stateless API implementation. The API is accessed over HTTPS through a load balancer that uses round-robin for load distribution. Two additional nodes have been added to the cluster and the load balancer has been configured to recognize the new nodes with no other change to the load balancer. What average performance change is guaranteed to happen, assuming all cluster nodes are fully operational?

A. 50% reduction in the response time of the API
B. 100% increase in the throughput of the API
C. 50% reduction in the JVM heap memory consumed by each node
D. 50% reduction in the number of requests being received by each node

Q137. An organization has deployed Runtime Fabric on an eight-node cluster with the performance profile. An API uses a non-persistent object store for maintaining some of its state data. What will be the impact to the state data if a server crashes?

A. State data is preserved
B. State data is rolled back to a previously saved version
C. State data is lost
D. State data is preserved as long as more than one node is unaffected by the crash

Q138. What aspect of logging is only possible for Mule applications deployed to customer-hosted Mule runtimes, but NOT for Mule applications deployed to CloudHub?

A. To change log4j2 log levels in Anypoint Runtime Manager without having to restart the Mule application
B. To log certain messages to a custom log category
C. To send Mule application log entries to Splunk
D. To directly reference one shared and customized log4j2.xml file from multiple Mule applications

Explanation/Reference: https://docs.mulesoft.com/runtime-manager/viewing-log-data

Q139. According to MuleSoft, which principle is common to both Service Oriented Architecture (SOA) and API-led connectivity approaches?

A. Service centralization
B. Service statefulness
C. Service reusability
D. Service interdependence

Top MuleSoft MCIA-Level-1 Courses Online: https://www.exams4sures.com/MuleSoft/MCIA-Level-1-practice-exam-dumps.html