  SQL SERVER ON VIRTUAL MACHINE RESOURCE SETTINGS AND NUMA ARCHITECTURE


      Team, I struggled to find a better title for this topic than the one above. What I am going to explain, in general, is how I think we should set SQL Server resources on VMs based on best practices. I know environments differ, but the concepts are similar. In my environment, for example, we have post-build scripts we use to determine Min and Max server memory and MAXDOP as well. But does this always work? So, in our discussion today, we are going to look at CPU, memory, storage and monitoring. But I will first look at CPU architecture, which includes single-core, multicore and NUMA. Once we understand these, then when I start explaining the settings in SQL Server, it will be a smooth ride.

      As I said in my last article, you cannot manage SQL Server efficiently and effectively if you don't understand the infrastructure SQL Server is sitting on. Therefore, know your infrastructure, and you will understand most challenges that your SQL Server may have (especially performance). That is all the more reason why we need to work with the Windows infrastructure team and try as much as possible to understand the Windows server build (from physical to virtual servers) and the architecture. I am always surprised to see some DBAs configuring VMs alone while ignoring the host these VMs are sitting on. The truth is, you see this when developers turn into DBAs; when infrastructure engineers cross over to become DBAs, you really see the difference, because infrastructure engineers already have a solid background in what SQL Server is interacting with at the back end. In conclusion, I think that at the beginning infrastructure engineers make better production DBAs while developers make better application DBAs, and as time progresses there is a convergence of skills.

      Now, let me cover part 1 of this topic today; I will release part 2 in a week's time.

      CPU & NUMA

      We are going to discuss Processors with emphasis on CACHES because that is our area of interest. When we understand the concept, then we can talk about performance and configuring SQL Server in a VM.

      Until the 386SX was released in 1988, processors were manufactured without any cache. Most systems then ran below 16 MHz (8088, 8086, 286, 386), and systems below 16 MHz usually had no cache memory at all. Prior to the 486 processor, the cache on the motherboard was the only cache used in the system.

      Starting with the 486 series, processors began including what was called L1 (Level 1) cache directly on the processor. This meant that the L1 cache always ran at the full speed of the chip, especially important when the later 486 chips began to run at speeds higher than the motherboards they were plugged into. During this time the cache on the motherboard was called the second level or L2 cache, which ran at the slower motherboard speed.

      Wait a minute!! What is a cache, and why do we have one? Yeah, you think you know it... OK, let's look at the meaning critically. Caches are small pools of memory that store the data the CPU is most likely to need next. Today almost all CPUs have L1 and L2 caches because the performance benefits cannot be ignored. The goal of the cache system is to ensure that the CPU already has the next bit of data it will need loaded into cache by the time it goes looking for it; this is called a cache hit.

      A cache miss, on the other hand, means the CPU has to go hurrying off to find the data elsewhere. This is where the L2 cache comes into play: while it is slower, it is also much larger. Some processors use an inclusive cache design (meaning data stored in the L1 cache is also duplicated in the L2 cache) while others are exclusive (meaning the two caches never share data). If data cannot be found in the L2 cache, the CPU continues down the chain to L3 (typically still on-die), then L4 (if it exists) and main memory (DRAM).
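      The same hit/miss idea shows up inside SQL Server itself: the buffer pool is a cache of data pages, and the "Buffer cache hit ratio" counter tells you how often a requested page was already in memory. As a sketch you can run on a live instance (assuming you have VIEW SERVER STATE permission; the counter names come from the documented sys.dm_os_performance_counters DMV):

```sql
-- Buffer cache hit ratio: percentage of page requests served from
-- memory (a "hit") rather than read from disk (a "miss").
SELECT (hits.cntr_value * 100.0) / base.cntr_value AS buffer_cache_hit_ratio_pct
FROM sys.dm_os_performance_counters AS hits
JOIN sys.dm_os_performance_counters AS base
  ON base.object_name = hits.object_name
WHERE hits.counter_name = 'Buffer cache hit ratio'
  AND base.counter_name = 'Buffer cache hit ratio base'
  AND hits.object_name LIKE '%Buffer Manager%';
```

      The raw counter is cumulative since instance startup, so treat a single reading as a rough indicator rather than a precise measure.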

      I will use the diagram below to explain to you what I am talking about:

      From the diagram above, on the left is a single core. This is what old systems used to have. The CPU (green), which contains the core registers and the L1, L2 and L3 caches, is a chip that goes into the socket on the motherboard. Then there is a board that contains the main memory (the RAM) and the hard disk.

      This design is no longer used these days. Instead we have the multicore hierarchy, for example:

      MULTICORE ARCHITECTURE (Storage Hierarchy)

      A single CPU package may contain four cores, and each core has its own L1 and L2 caches and its own registers. Then there is a shared L3 cache; the L3 shares data between the four cores. For example, if data is not available in a core's L1, the core checks its L2, and all cores fall back to the shared L3. This is the multicore storage hierarchy. What happens is that one physical CPU is internally extended to accommodate four cores, each with its own L1 and L2; we have only one physical CPU, extended so the cores can share data.
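      You can see how many of these logical CPUs SQL Server is actually using: it creates one scheduler per logical processor it is allowed to run on. A sketch against a live instance (assumes VIEW SERVER STATE permission; the columns are from the documented sys.dm_os_schedulers DMV):

```sql
-- One scheduler per logical CPU that is visible and online to SQL Server.
SELECT scheduler_id,
       cpu_id,            -- the logical processor backing this scheduler
       parent_node_id,    -- the node this scheduler belongs to
       status
FROM sys.dm_os_schedulers
WHERE status = 'VISIBLE ONLINE';
```

      On a 4-core VM you would expect four rows back; fewer rows than cores usually means an affinity mask or licensing limit is in play.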

      The other architecture I want us to look at is NUMA.

      NUMA ARCHITECTURE (Non-Uniform Memory Access)

      NUMA means the board contains two or more CPU sockets, and each of the CPUs has four or more cores. NUMA systems are advanced server platforms with more than one system bus. Where before we had only one CPU with cores, we now have banks of memory whose access speed depends on which CPU is accessing them. So how an application (SQL Server) accesses L1, L2, L3 and main memory depends on our configuration.
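      On a live instance you can see the NUMA layout SQL Server detected at startup. A sketch using the documented sys.dm_os_nodes DMV (assumes VIEW SERVER STATE permission; the dedicated admin connection node is filtered out):

```sql
-- NUMA nodes as SQL Server sees them: one row per node,
-- with the CPUs and schedulers attached to each.
SELECT node_id,
       memory_node_id,
       cpu_count,
       online_scheduler_count,
       node_state_desc
FROM sys.dm_os_nodes
WHERE node_state_desc <> 'ONLINE DAC';
```

      A single row with node_id 0 suggests the VM presents no NUMA topology (or the hardware is interleaved); multiple rows mean SQL Server will partition memory and schedulers per node.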

      SQL Server is NUMA aware. NUMA can match memory with CPUs through specialized hardware (hardware NUMA) or by configuring SQL Server memory (soft-NUMA). If you run SELECT DISTINCT memory_node_id FROM sys.dm_os_memory_clerks and it returns only node 0, either you do not have hardware NUMA, or the hardware is configured as interleaved (non-NUMA). If you think your hardware NUMA is configured incorrectly, contact your hardware vendor to enable NUMA. SQL Server ignores the NUMA configuration when hardware NUMA has four or fewer CPUs and at least one node has only one CPU.

      No no no!! Let me go back to my focus! Let's try to understand how data moves around from disk to memory to CPU. We know that data is stored on hard disks in BLOCKs, so each time you save your data or your letter, it is stored on the hard disk as blocks. Frequent deletes of documents and files result in fragmentation, which then requires defragmentation.

      Now, software communicates between the hard disk and the main memory. This software can be SQL Server, which has a database buffer. The database buffer sits in main memory, allocates memory to data and controls how that memory is used. Once the database buffer reads those blocks, they are no longer called blocks; they are now called pages. So in memory it is called a PAGE and on the hard disk it is called a block (in SQL Server's case, the same 8 KB unit).
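      You can watch those 8 KB pages sitting in the buffer pool, and even see which NUMA node each one landed on. A sketch against a live instance (assumes VIEW SERVER STATE permission; sys.dm_os_buffer_descriptors is documented but scanning it can be expensive on a large buffer pool):

```sql
-- Count the 8 KB pages currently cached in the buffer pool, per NUMA node.
SELECT numa_node,
       COUNT(*)             AS cached_pages,
       COUNT(*) * 8 / 1024  AS approx_mb   -- each page is 8 KB
FROM sys.dm_os_buffer_descriptors
GROUP BY numa_node
ORDER BY numa_node;
```

      On a NUMA machine, a heavy skew toward one node can be a sign that memory is not balanced across nodes, which ties directly into the Min/Max memory settings we will cover in part 2.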

      PLEASE WATCH OUT FOR PART 2…. CONFIGURING SQL SERVER MEMORY, CPU, STORAGE AND MONITORING (VIRTUAL MACHINES)