Friday, July 4, 2014

Factors to Consider for vCPU to pCPU Ratios in VMware

Several factors need to be considered to arrive at the right vCPU to pCPU ratio. The following are high-level guidelines as per VMware.
1. Stay within a range of 6:1 to 8:1, even though ratios as high as 25:1 are theoretically possible.
2. Keep the CPU Ready metric at 5% or below.

The actual ratio depends on the following factors.

1. The vSphere version: newer versions allow more consolidation.
2. The processor age: newer processors can achieve higher ratios.
3. The mix of workloads running on the host.

To arrive at a realistic ratio that reduces performance problems caused by VMware-based virtualization, monitor these metrics with utilities such as vScope Explorer, which is part of VKernel vOPS Server Explorer.
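
As a quick sanity check, the CPU Ready summation values reported by vCenter (in milliseconds) can be converted to the percentage used in guideline 2 above. A minimal sketch, assuming the usual conversion formula and the 20-second real-time chart interval; adjust the interval to whatever sampling interval you actually use.

// Minimal sketch: convert a CPU Ready summation sample (milliseconds)
// into a percentage for comparison against the 5% guideline.
public class CpuReadyCheck {

    // readyMillis: cpu.ready.summation value for one sampling interval
    // intervalSeconds: length of that sampling interval in seconds
    static double cpuReadyPercent(double readyMillis, double intervalSeconds) {
        return (readyMillis / (intervalSeconds * 1000)) * 100;
    }

    public static void main(String[] args) {
        double readyMillis = 1500;                         // example sample value
        double pct = cpuReadyPercent(readyMillis, 20);     // 20 s real-time interval
        System.out.printf("CPU Ready: %.2f%% (target: 5%% or below)%n", pct);
    }
}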



Tuesday, February 11, 2014

Weblogic Work Manager Usage


Thread utilization of a WebLogic Server instance can be controlled by defining rules and constraints
and by defining a Work Manager. Work Manager constraints can be applied either globally to a
WebLogic Server domain or to a specific application component.

Use a Work Manager for thread management in the following scenarios (a programmatic usage sketch follows the list).

1. When one application needs to be given a higher priority over another and the default fair share is not sufficient.
2. A response time goal is required.
3. To avoid server deadlock by configuring a minimum thread constraint.
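
Besides referencing a Work Manager in deployment descriptors, a component can interact with one programmatically through the CommonJ API that WebLogic exposes over JNDI. A minimal sketch, assuming a Work Manager has been defined and mapped to the hypothetical resource reference wm/MyWorkManager:

import javax.naming.InitialContext;
import commonj.work.Work;
import commonj.work.WorkManager;

public class WorkManagerClient {
    public void submit() throws Exception {
        // Look up the Work Manager by the JNDI name declared for it in the
        // deployment descriptor (the name "wm/MyWorkManager" is hypothetical).
        InitialContext ctx = new InitialContext();
        WorkManager wm = (WorkManager) ctx.lookup("java:comp/env/wm/MyWorkManager");

        // Schedule a unit of work; the server runs it on a thread governed by
        // the constraints (min/max threads, fair share, response time goal)
        // configured for this Work Manager.
        wm.schedule(new Work() {
            public void run() {
                // application logic executed under the Work Manager's policy
            }
            public void release() { }              // called if the work is cancelled
            public boolean isDaemon() { return false; }
        });
    }
}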


Sunday, February 9, 2014

Service Component Architecture (SCA)

Service Component Architecture (SCA) defines a programming model for composite SOA applications. SCA provides a model for the composition of services and for the creation of service components, including the reuse of existing applications within SCA composites. SCA is based on the idea of service composition, also known as orchestration.

The SCA specification consists of four main elements.

1. Assembly Model Specification -  This model defines how to specify the structure of a composite application.

2. Component Implementation Specification - This specification defines how a component is actually written in a particular programming language.

3. Binding Specification - This specification defines how the services published by a component can be accessed.

4. Policy Framework Specification - This specification describes how to add non-functional requirements to services.

More information regarding SCA can be found at http://tuscany.apache.org/documentation-2x/sca-introduction.html
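
To make the component implementation idea concrete, here is a minimal sketch of an SCA component written with the OASIS SCA Java annotations used by Apache Tuscany. The service name, property, and greeting logic are made up for illustration; how the component is assembled, bound, and governed is left to the assembly model, binding, and policy specifications described above.

import org.oasisopen.sca.annotation.Property;
import org.oasisopen.sca.annotation.Service;

// Service contract exposed by the component.
interface GreetingService {
    String greet(String name);
}

// Component implementation written in Java: @Service declares which interface
// is published, and @Property lets the composite file inject configuration.
@Service(GreetingService.class)
public class GreetingServiceImpl implements GreetingService {

    @Property
    protected String salutation = "Hello";   // overridable from the composite

    public String greet(String name) {
        return salutation + ", " + name;
    }
}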




Tuesday, January 28, 2014

Finding Linux Machine CPU Architecture Info

# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                4
On-line CPU(s) list:   0-3
Thread(s) per core:    1
Core(s) per socket:    1
CPU socket(s):         4
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 47
Stepping:              2
CPU MHz:               2396.863
BogoMIPS:              4793.72
Hypervisor vendor:     VMware
Virtualization type:   full
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              30720K
NUMA node0 CPU(s):     0-3

Tuesday, January 14, 2014

Performance Testing - Considerations

Performance testing is done to provide information about an application's speed, stability, and scalability. In general, performance testing uncovers what needs to be improved before the application goes live. Without performance testing, applications are likely to suffer from issues such as running slowly when several users use them simultaneously, or behaving inconsistently across different operating conditions. Performance testing determines whether or not the software meets its speed, scalability, and stability requirements under expected workloads. Applications that go live with poor performance metrics, because performance testing was missing or inadequate, are likely to gain a bad reputation and fail to meet expected business goals.

Common bottlenecks for application performance include, but are not limited to:

  1. CPU utilization
  2. Memory utilization
  3. Network utilization
  4. Operating System limitations
  5. Disk usage
To ascertain the performance of an application during different performance testing activities, analyze the following parameters.

  1. Processor Usage – Amount of time each processor spends executing non-idle threads.
  2. Hit ratios – The hit ratio measures the fraction of traffic served from the web cache, as well as the number of SQL statements handled by cached data instead of expensive I/O operations. This is a good place to start when solving bottlenecking issues.
  3. Hits Per Second – The number of hits on a web server during each second of a load test.
  4. Rollback Segment – The amount of data that can roll back at any point in time.
  5. Database Locks – Locking of tables and databases needs to be monitored and carefully tuned.
  6. Top Waits – These are monitored to determine which wait times can be cut down when tuning how quickly data is retrieved from memory.
  7. Memory use – Amount of physical memory available to processes on a computer.
  8. Disk time – Amount of time the disk is busy executing a read or write request.
  9. Bandwidth – Shows the bits per second used by a network interface.
  10. Committed memory – Amount of virtual memory used.
  11. Memory pages/second – Number of pages written to or read from the disk in order to resolve hard page faults. Hard page faults occur when code not in the current working set is called up from elsewhere and retrieved from disk.
  12. Network bytes total per second – The rate at which bytes are sent and received on the interface, including framing characters.
  13. Page faults/second – The overall rate at which page faults are processed by the processor. This occurs when a process requires code from outside its working set.
  14. CPU interrupts per second – The average number of hardware interrupts a processor receives and processes each second.
  15. Disk queue length – The average number of read and write requests queued for the selected disk during a sample interval.
  16. Network output queue length – Length of the output packet queue, in packets. Anything more than two indicates a delay, and the bottleneck needs to be addressed.
  17. Response time – Time from when a user enters a request until the first character of the response is received.
  18. Private bytes – Number of bytes a process has allocated that cannot be shared with other processes. These are used to measure memory leaks and usage.
  19. Throughput – The rate at which a computer or network receives requests per second.
  20. Amount of connection pooling – The number of user requests that are met by pooled connections. The more requests met by connections in the pool, the better the performance will be.
  21. Maximum active sessions – The maximum number of sessions that can be active at once.
  22. Thread counts – An application's health can be measured by the number of threads that are running and currently active.
  23. Garbage Collection – Returning unused memory back to the system; garbage collection needs to be monitored for efficiency.
You can use any APM (Application Performance Management), NPM (Network Performance Management), or BTM (Business Transaction Monitoring) tool to analyze the above parameters.
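
For JVM-hosted applications, a few of these parameters (memory use, thread counts, garbage collection) can also be sampled directly from inside the application with the standard java.lang.management MXBeans. A minimal sketch:

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.ThreadMXBean;

// Minimal sketch: sample heap usage, live thread count, and GC activity
// using the platform MXBeans.
public class JvmMetricsSampler {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();

        System.out.println("Heap used (bytes): " + memory.getHeapMemoryUsage().getUsed());
        System.out.println("Live threads     : " + threads.getThreadCount());

        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println("GC " + gc.getName()
                    + " collections=" + gc.getCollectionCount()
                    + " timeMs=" + gc.getCollectionTime());
        }
    }
}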

When selecting a performance monitoring tool, consider the following factors:
  • Whether it monitors the performance of the databases.
  • Whether it monitors the physical as well as the virtual components of the infrastructure.
  • Whether it understands and maps all the components involved in a transaction.
  • Whether it collects response times for a transaction.

Chunked Streaming Mode Usage in a Business Service in OSB 11g and OSB 12c

As per the OSB documentation provided by Oracle:

https://docs.oracle.com/cd/E28280_01/dev.1111/e15866/transports.htm#OSBDV288



The Chunked Streaming Mode property should be selected if you want to use HTTP chunked transfer encoding to send messages.

Chunked transfer encoding is part of the HTTP 1.1 specification, and it allows clients to parse dynamic data immediately after the first chunk is read.
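
Outside of OSB, the same mechanism can be seen with the plain JDK HTTP client. A minimal sketch (not OSB-specific, endpoint URL is hypothetical) that sends a request body with chunked transfer encoding:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// Minimal sketch: POST a body using HTTP chunked transfer encoding with
// HttpURLConnection. The endpoint URL and payload are hypothetical.
public class ChunkedPostExample {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://example.com/service");   // hypothetical endpoint
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);

        // Enable chunked transfer encoding; the body length is not declared
        // up front, so no Content-Length header is sent.
        conn.setChunkedStreamingMode(4096);                 // chunk size in bytes

        try (OutputStream out = conn.getOutputStream()) {
            out.write("<payload>example</payload>".getBytes("UTF-8"));
        }
        System.out.println("HTTP status: " + conn.getResponseCode());
    }
}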

Use Chunked Streaming Mode to send messages with HTTP chunked transfer encoding in the HTTP transport configuration of a business service. Do not use this option if you have HTTP redirects configured. Also, try disabling this option if you are getting:


  • Client requests get a read timed out error
  • "Request Entity Too Large" appears with the BEA-380000 error in the logs
  • "Request Entity Too Large" appears with the HTTP status code 413 error in the logs (OSB 12c)
  • The last executed OSB instance continuously retries every 5 minutes

If you disable "Chunked Streaming Mode", OSB may invoke the target system twice for a single invocation. Also, the default "Exactly Once" behavior of the proxy service's Quality of Service attribute is affected. To fix this, set the "Quality of Service" attribute to "Exactly Once" on the route node of the proxy service's message flow.

As of OSB 12c, Oracle HTTP Server 12c is bundled with Oracle SOA Suite 12c and is recommended as part of the enterprise deployment guide provided by Oracle, so chunked streaming mode needs to be disabled on a case-by-case basis. If the business service endpoint is a SOAP-based service, most probably the SOAP endpoint does not require the payload in chunks.

Friday, January 10, 2014

Taking a Java Heap Dump from the Command Line

Use the jps command to find the process id
jdk160_35/bin>./jps
22321 jvmapp
58468 Jps

Use jmap to acquire the dump
jdk160_35/bin> ./jmap -dump:format=b,file=/tmp/my.hprof 22321
Dumping heap to /tmp/my.hprof ...
Heap dump file created
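
If jmap is not convenient (for example, on a locked-down host), the same HPROF dump can be triggered from inside the JVM through the HotSpot diagnostic MXBean on Oracle/OpenJDK HotSpot JVMs. A minimal sketch, with a hypothetical output path:

import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

// Minimal sketch: trigger an HPROF heap dump programmatically via the
// HotSpot diagnostic MXBean. The output path is hypothetical.
public class HeapDumper {
    public static void main(String[] args) throws Exception {
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);

        // true = dump only live objects (forces a GC first), like jmap -dump:live
        bean.dumpHeap("/tmp/my.hprof", true);
        System.out.println("Heap dump written to /tmp/my.hprof");
    }
}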