Tuxjm, jmedina's site

Author: admin_tuxjm

Xen.org 4.1 Released

That's right, the stable release of Xen 4.1 is here after 11 months of development, thanks to the volunteers and companies that contribute to Xen.org. Below is the official announcement (in English). It is time to try this release, and hopefully many of the problems present in Xen 4.0 have been fixed.

After 11 months of development and 1906 commits later (6 a day !!!), Xen.org is proud to present its new stable Xen 4.1 release. We also wanted to take this opportunity to thank the 102 individuals and 25 organisations who have contributed to the Xen codebase and the 60 individuals who made just over 400 commits to the Xen subsystem and drivers in the Linux kernel.

New Xen Features

Xen 4.1 sports the following new features:

  • A re-architected XL toolstack that is functionally nearly equivalent to XM/XEND
  • Prototype credit2 scheduler designed for latency-sensitive workloads and very large systems
  • CPU Pools for advanced partitioning
  • Support for large systems (>255 processors and 1GB/2MB super page support)
  • Support for x86 Advanced Vector eXtension (AVX)
  • New Memory Access API enabling integration of 3rd party security solutions into Xen virtualized environments
  • Even better stability through our new automated regression tests

Further information can be found in the release notes.

XL Toolstack: Xen 4.1 includes a re-architected toolstack based on the new libxenlight library, which provides a simple and robust API for toolstacks. XL is functionally equivalent and almost entirely backwards compatible with existing XM domain configuration files. The XEND toolstack remains supported in Xen 4.1; however, we strongly recommend that users upgrade to XL. For more information see the Migration Guide. Projects are underway to port XCP's xapi and libvirt to the new libxenlight library.
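As an illustration of that backwards compatibility, a minimal XM-style PV domain configuration file is accepted unchanged by XL. The file below is a sketch; the guest name, kernel paths, volume and bridge names are assumptions for illustration, not taken from the announcement:

```
# /etc/xen/guest1.cfg - a minimal PV guest config usable with both xm and xl
name    = "guest1"
memory  = 512
vcpus   = 1
kernel  = "/boot/vmlinuz-guest"
ramdisk = "/boot/initrd-guest"
disk    = [ "phy:/dev/vg0/guest1,xvda,w" ]
vif     = [ "bridge=xenbr0" ]
```

The same file can be started with `xl create /etc/xen/guest1.cfg` under the new toolstack, or with `xm create` under the legacy XEND toolstack.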

Credit2 Scheduler: The credit1 scheduler has served Xen well for many years, but it has several weaknesses, including poor behavior with latency-sensitive workloads such as network traffic and audio. The credit2 scheduler is a complete rewrite, designed with latency-sensitive workloads and very large numbers of CPUs in mind. We are still calling it a prototype scheduler, as the algorithm needs more work before it will be ready to become the main scheduler. However, it is stable and will perform better than credit1 for some workloads.

CPU pools: The default credit scheduler provides only limited mechanisms (pinning VMs to CPUs and using weights) to partition a machine and allocate CPUs to VMs. CPU pools provide a more powerful and easy-to-use way to partition a machine: the physical CPUs are divided into pools, each pool runs its own scheduler, and each running VM is assigned to one pool. This not only allows a more robust and user-friendly way to partition a machine, but also allows using different schedulers for different pools, depending on which scheduler works best for that workload.
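As a sketch of the per-pool scheduler idea, a pool running the credit2 scheduler on two dedicated CPUs could be described by a cpupool configuration file along these lines (the pool name and CPU numbers are assumptions for illustration):

```
# cpupool.cfg - a CPU pool with its own scheduler
name  = "latency-pool"
sched = "credit2"
cpus  = [2, 3]
```

Such a pool is created with the toolstack's cpupool-create command and VMs are then assigned to it, so latency-sensitive guests can run under credit2 while the rest of the machine stays on the default credit scheduler.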

Large Systems: Xen 4.1 has been extended and optimized to take advantage of new hardware features, increasing performance and scalability in particular for large systems. Xen now supports the Intel x2APIC architecture and is able to support systems with more than 255 CPUs. Further, support for EPT/VTd 1GB/2MB super pages has been added to Xen, reducing the TLB overhead. EPT/VTd page table sharing simplifies the support for Intel’s IOMMU by allowing the CPU’s Enhanced Page Table to be directly utilized by the VTd IOMMU. Timer drift has been eliminated through TSC-deadline timer support that provides a per-processor timer tick.

Advanced Vector eXtension (AVX): Support for the xsave and xrstor floating point instructions has been added, enabling Xen guests to utilize the AVX instructions available on newer Intel processors.

Memory Access API: The mem_access API has been added to enable suitably privileged domains to intercept and handle memory faults. This extends Xen's security features in a new direction and enables third parties to invoke malware detection software or other security solutions on demand from outside the virtual machine.


During the development cycle of Xen 4.1, the Xen community worked closely with upstream Linux distributions to ensure that Xen dom0 support and Xen guest support is available from unmodified Linux distributions. This means that using and installing Xen has become much easier than it was in the past.

  • Basic dom0 support was added to the Linux kernel and a vanilla 2.6.38 kernel is now able to boot on Xen as initial domain. There is still some work to do as the initial domain is not yet able to start any VMs, but this and other improvements have already been submitted to the kernel community or will be soon.
  • Xen developers rewrote the Xen PV-on-HVM Linux drivers in 2010 and submitted them for inclusion in the upstream Linux kernel. The Xen PV-on-HVM drivers were merged into upstream kernel.org Linux 2.6.36, and various optimizations were added in Linux 2.6.37. This means that any Linux 2.6.36 or 2.6.37 kernel binary can now boot natively, on Xen as dom0, on Xen as PV guest, and on Xen as PV-on-HVM guest. For a full list of supported Linux distributions see here.
  • Xen support for upstream Qemu was developed, such that upstream Qemu can be used as the Xen device model. Our work has received good feedback from the Qemu community, but has not yet been merged into mainline.

The Xen development community recognizes that there is still some way to go, thus we will continue to work with upstream open source projects to ensure that Xen works out-of-the-box with all major operating systems, allowing users to get the benefits of Xen such as multi-OS support, performance, reliability, security and feature richness without incurring the burden of having to use custom builds of operating systems.

More Info

Downloads, release notes, data sheet and other information are available from the download page. Links to useful wiki pages and other resources can be found on the Xen support page.

How to install an SSL root certificate in the Chromium browser

Unlike Mozilla Firefox, the Chromium browser uses the Mozilla NSS library for its SSL/TLS support. To install the root certificate we will need the certutil program, part of the libnss3-tools package.

Install the libnss3-tools package:

$ sudo aptitude install libnss3-tools

Now download the root certificate:

$ wget http://mail.e-compugraf.com/Compugraf_Root_CA.crt

Install the root certificate into the local NSS database:

$ certutil -d sql:$HOME/.pki/nssdb -A -t TC -n "compugraf.com" -i ~/Compugraf_Root_CA.crt

Done. Now open Chromium and go to the secure site whose authenticity you want to validate with the root certificate, and you will see that the red cross is gone :).

Two-interface Shorewall firewall configuration with NAT on Ubuntu Server

Good day. Here is a new document that describes the installation and configuration of a packet-filtering and connection-control firewall system on the Ubuntu Server LTS 8.04.4 operating system.

Throughout the document we will carry out various configuration tasks on a firewall with routing functions on an Ubuntu Server LTS 8.04.4 system with two network interfaces: one connected directly to the Internet provider's router or modem, and the other connected to the local network switch, to which several servers, user PCs, and network printers are attached. The most common firewall configuration, administration, and monitoring tasks are described in the document; the following list shows the tasks to be performed:

  • Configure the network parameters of a GNU/Linux system with two network interfaces (multi-homed)
  • Install the GNU/Linux prerequisites for operating as a router and firewall
  • Install and perform the basic configuration of the Shorewall firewall on a system with two network interfaces
  • Configure IP masquerading (MASQUERADING/SNAT) to give LAN users Internet access in a secure and controlled way
  • Create firewall rules for a transparent HTTP proxy
  • Create NAT port-forwarding (DNAT) rules to redirect traffic from the Internet to systems on the local network
  • Create one-to-one NAT rules
  • Configure the logging system to record Shorewall firewall events
  • Monitor firewall connections and bandwidth
  • Use the Shorewall firewall operation commands
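As a rough preview of what the document covers, a minimal two-interface Shorewall layout can be sketched with a few configuration fragments. The interface names (eth0 toward the ISP, eth1 toward the LAN), the 192.168.1.0/24 LAN range, and the 192.168.1.10 server address below are assumptions for illustration only:

```
# /etc/shorewall/zones - the firewall itself, the Internet, and the LAN
fw      firewall
net     ipv4
loc     ipv4

# /etc/shorewall/interfaces - eth0 faces the ISP router, eth1 the LAN switch
net     eth0    detect
loc     eth1    detect

# /etc/shorewall/masq - masquerade (SNAT) LAN traffic leaving through eth0
eth0    192.168.1.0/24

# /etc/shorewall/policy - allow LAN out, drop unsolicited Internet traffic
loc     net     ACCEPT
net     all     DROP    info
all     all     REJECT  info

# /etc/shorewall/rules - DNAT example: forward incoming HTTP to a LAN server
DNAT    net     loc:192.168.1.10    tcp    80
```

The policy file sets the defaults and the rules file punches specific holes through them, which is the pattern the DNAT and transparent-proxy tasks above build on.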

To read the document online go to: Two-interface Shorewall firewall configuration with NAT on Ubuntu Server.

If you have questions or comments about the document, do not hesitate to contact me.

Copyright © 2019 Tuxjm el sitio de jmedina

Theme by Anders Norén