Reply to Fog IP Address change on Tue, 19 Jan 2016 14:27:02 GMT   

@Wayne-Workman Just to be sure I understand: you want to know what proxy servers are and why we use them, and then how to configure Linux to use them(?).


          Reply to Fog IP Address change on Tue, 19 Jan 2016 14:27:50 GMT   

@george1421 Not the why part. Just the how for Linux & FOG part. :-) I know why already.


          Reply to Fog IP Address change on Tue, 19 Jan 2016 16:03:34 GMT   

Since most companies that have a proxy server in their environment restrict direct internet access, we have to configure Linux (and FOG) to communicate with the internet through the company-authorized proxy server(s).

Most command-line utilities inspect the environment variables to determine whether they need to go through a proxy when accessing files and services on the internet.

These environment variables are http_proxy, https_proxy, and ftp_proxy. (I've also seen these variables referenced in all upper case, like HTTP_PROXY, HTTPS_PROXY, and so on. To date I've only used the lower-case variables, so I can't say whether case matters on every Linux distro.)

You could add these variables to each command invocation, but typically system admins add them to a common logon script so they are available to anyone who logs into the Linux system. The most common place is the bash shell logon script /etc/bashrc. To make these variables persist in the environment they must be defined with export, as below.

export http_proxy=http://<proxy_server_ip>:<proxy_server_port>
export https_proxy=http://<proxy_server_ip>:<proxy_server_port>
export ftp_proxy=http://<proxy_server_ip>:<proxy_server_port>

In the case of the FOG installer, we need to tell it not to use the proxy when connecting to the FOG server directly, so we must also set this environment variable.

export no_proxy="<fog_server_ip>"

During the FOG installation the installer script makes wget calls back into the running FOG server for specific actions. Without the no_proxy setting those requests would be sent to the proxy server, and some proxy servers won't proxy requests to internal networks, so this setting is required.

Some command-line tools don't inspect the environment variables and instead require specific settings in their own config files. These include FOG, svn (I assume git too), cpan, and pear. For these you will need to update the appropriate config file: for FOG proper, update the proxy server settings in the FOG management console; for SVN, create a file called servers in /etc/subversion and populate it with the required settings, as sketched below.
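For reference, a minimal sketch of that /etc/subversion/servers file, using the same <proxy_server_ip> and <proxy_server_port> placeholders as above (the username/password lines are only needed if your proxy requires authentication):

[global]
http-proxy-host = <proxy_server_ip>
http-proxy-port = <proxy_server_port>
# http-proxy-username = <user>
# http-proxy-password = <password>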


          "We'll See If It Happens"   

NASHUA, N.H.—The metaphor of choice for Howard Dean's Internet-fueled campaign is "open-source politics": a two-way campaign in which the supporters openly collaborate with the campaign to improve it, and in which the contributions of the "group mind" prove smarter than that of any lone individual.

Dean campaign manager Joe Trippi has admitted on numerous occasions (including this Slashdot post) that his time in Silicon Valley affected his thinking about politics. "I used to work for a little while for Progeny Linux Systems," Trippi told cyber-guru Lawrence Lessig in an August interview. "I always wondered how could you take that same collaboration that occurs in Linux and open source and apply it here. What would happen if there were a way to do that and engage everybody in a presidential campaign?"

But tonight, at the end of a town hall meeting at Daniel Webster College, is the first time I've seen the metaphor in action. Even if it had nothing to do with the Internet.

At the end of tonight's event, Paul Johnson, an independent voter from Nashua who supported John McCain in 2000 and has supported Dean since May, tells Dean that he's "deeply troubled" by the idea that his candidate is going to turn down federal matching funds and bust the caps on campaign spending. Politics is awash in too much money, Johnson says. Why not take the moral high ground and abide by the current system? That sounds like a great idea until Bush spends $200 million, Dean says. Well, then "challenge him to spend less," Johnson replies. Tell him you'll stay under the spending limits if he does, too. Dean's face lights up. "I'll do that at the press conference on Saturday," he says. "That's a great idea." (Saturday at noon is when Dean is scheduled to announce the results of the campaign vote on whether to abandon public financing.)

I walk over to Dean's New Hampshire press secretary, Matthew Gardner, and tell him his candidate just agreed, in an instant, to announce on Saturday that he'll stay under the federal spending caps for publicly financed candidates, if President Bush agrees to do the same (which, admittedly, is more than a little unlikely.) Gardner looks puzzled, then laughs. "That'll be interesting," he says. "We'll see if it happens."


          Comment on Mount remote filesystem via ssh protocol using sshfs and fuse [Fedora/RedHat/Debian/Ubuntu way] by How do you remotely administer your Linux boxes? - Just just easy answers   
[...] set up FUSE to (auto)mount your remote volume. [...]
          Comment on Install nfdump and nfsen netflow tools in Linux by george   
Answered question 3 by myself :) so let me add one more question: how can I export the nfcapd flows to MySQL?
          Comment on Install nfdump and nfsen netflow tools in Linux by george   
Many thanks for the extremely useful article, it really solved my problems. Since this is the first time I'm using tools like these, I have some questions and would be really glad if someone could answer them: 1) Nfsen contains flows and diagrams, so I'm almost sure that fprobe and nfdump work. However, when I go to the folder /data/nfsen/profiles-data/live/MYROUTER/... there are many files like nfcapd.201307201235 and a lot of them are empty (nfdump -r /data/nfsen/profiles-data/live/MYROUTER/2013/07/20/nfcapd.201307201235), so I guess there is something wrong with the command or the timeouts. What can I do to fix that? 2) Is /data/nfsen/profiles-data/live/MYROUTER/... the folder where the flows are supposed to be saved, or is there another folder where nfdump saves them? 3) So far the files that contain flows have these fields: date flow start, duration, protocol, src ip:port, dst ip:port, packets, bytes, flows. I would like to capture more of the information that NetFlow supports, like TCP flags. Can someone tell me how to construct my command to capture more info?
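Regarding question 3, a hedged pointer (not from the article): nfdump chooses what it prints with the -o output option, and the long/extended formats include TCP flags among other fields (check nfdump -h on your version):

# print the flows with TCP flags, ToS and more
nfdump -r /data/nfsen/profiles-data/live/MYROUTER/2013/07/20/nfcapd.201307201235 -o long
nfdump -r /data/nfsen/profiles-data/live/MYROUTER/2013/07/20/nfcapd.201307201235 -o extended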
          Comment on 13 Linux lethal commands by Naresh   
Also, never run "mv * "
          Linus Explains What Surprises Him After 25 Years Of Linux   
Linus Torvalds appeared in a new "fireside chat" with VMware Head of Open Source Dirk Hohndel. An anonymous reader writes: Linus explained what still surprises him about Linux development. "Code that I thought was stable continually gets improved. There are things we haven't touched for many years, then someone comes along and improves them or makes bug reports in something I thought no one used. We have new hardware, new features that are developed, but after 25 years, we still have old, very ba ...
          Survey Says: Raspberry Pi Still Rules, But X86 SBCs Have Made Gains   
DeviceGuru writes: Results from LinuxGizmos.com's annual hacker-friendly single board computer survey are in, and not surprisingly, the Raspberry Pi 3 is the most desired maker SBC by a 4-to-1 margin. In other trends: x86 SBCs and Linux/Arduino hybrids have trended upwards. The site's popular hacker SBC survey polled 1,705 survey respondents and asked for their first, second, and third favorite SBCs from a curated list of 98 community oriented, Linux- and Android-capable boards. Spreadsheets com ...
          Comment on Stop installing postgres on your laptop : use Docker instead by Anonyme   
2 problems for me with this approach: 1. Shutting down the container will destroy any and all data that I've created, because of (2). 2. I'm running Windows 10 Home with Docker Tools, and when I set a volume for postgres I get the following error when it reboots: FATAL: could not open file "pg_xlog/000000010000000000000001": No such file or directory. From what I understand, this has to do with the container (which uses Linux commands) trying to bind a *nix folder to a Windows folder. Which is weird, because the Jenkins container runs just fine ¯\_(ツ)_/¯ This means I can't have a folder on Windows with my data :(
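One possible workaround, as a sketch (not from the comment, and assuming the official postgres image): use a Docker named volume instead of bind-mounting a Windows folder, which keeps the data inside Docker's own storage and sidesteps the *nix-to-NTFS binding problem that breaks pg_xlog.

# create a named volume managed by Docker rather than a Windows path
docker volume create pgdata
# run postgres with the named volume mounted at its data directory
docker run -d --name pg -e POSTGRES_PASSWORD=secret -v pgdata:/var/lib/postgresql/data postgres:9.6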
          Comment on Using the Raspberry Pi as an OpenVPN gateway to give VPN access to your whole local network by Nettlebay   
Well, it's me again... To turn a *.ovpn file into an "autologin" one: 1. Go into /etc/openvpn as root. 2. Create a file auth.txt. 3. Edit the file (still as root) and put your username on the first line and your password on the second. That's all; save it. 4. Edit your myvpn.ovpn (as root, of course). Search (Ctrl+F) for "auth-user-pass" and append "/etc/openvpn/auth.txt", which gives: "auth-user-pass /etc/openvpn/auth.txt". Save. Two other tips: 1. I have been launching my VPN from /etc/rc.local for several years and it works perfectly (no need for NetworkManager). Add, towards the end of your rc.local (before exit 0): openvpn --config /etc/openvpn/myvpn.ovpn (or the name of your ovpn). Try it! You'll see, it's perfect: you are already on the VPN before you reach the desktop. 2. Command to switch VPNs (in a launcher): sh -c "sudo killall openvpn ; sudo openvpn --config /etc/openvpn/monvpn.ovpn". I have a whole battery of them, I'm crazy about VPNs! "sudo killall openvpn" stops the active VPN and "sudo openvpn --config /etc/openvpn/myvpnflorida.ovpn" starts the new one. The semicolon ";" chains the commands: even if the first command fails, the second is still executed. That matters if you run the command while no VPN is active: the first command fails, but the second can still work. For info: a dirt-cheap VPS (well... $3) that works like a charm: https://bandwagonhost.com. Be warned though, Google threw a fit at me with repeated Captcha prompts, etc., so I must not be the only one using this IP. Too bad for them, I've adopted Ixquick/Startpage as my default search engine. Serves them right! Link: https://www.it-connect.fr/lenchainement-des-commandes-sous-linux/
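A consolidated sketch of the autologin setup described above, assuming the same paths the commenter uses (/etc/openvpn/auth.txt and /etc/openvpn/myvpn.ovpn):

# as root: store the credentials, one per line, and lock the file down
printf '%s\n%s\n' 'your_username' 'your_password' > /etc/openvpn/auth.txt
chmod 600 /etc/openvpn/auth.txt
# edit /etc/openvpn/myvpn.ovpn so the directive reads:
#   auth-user-pass /etc/openvpn/auth.txt
# start the tunnel at boot from /etc/rc.local, before "exit 0"
# (--daemon keeps rc.local from blocking; the commenter runs it in the foreground)
openvpn --daemon --config /etc/openvpn/myvpn.ovpn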
          The better Mediathek   
English synopsis: MediathekView is a great open source software that makes it easy to scan the fragmented variety of video libraries run by German, Austrian and French public broadcasters. MediathekView is a free program (open source, for Mac, Linux, Windows), … Continue reading
          Rivendell: Free software for broadcasters   

It looks like Audicom 7 is getting some competition: Rivendell, an open-source program for broadcasters who use the GNU/Linux operating system, is similar to that product in that it acts as a manager, editor and song scheduler. Here are some of its more interesting features. Audio editor: lets you cut audio on and off the air. It also has a good editor for setting the In and Out points as well as fades, an essential tool in radio.

Continue reading for the full article


CD ripper: a tool that imports songs from CDs and saves the user from typing in the artist and album by hand. It also normalizes the volume of the songs (dB) and can change the output channels. It can be used on touch screens, with broad Live Assist support and multiple sound panels you can drive with a finger. It supports the PCM and MPEG Layer audio formats.


Help windows (logs): besides a simple, friendly interface, Rivendell has 2 auxiliary logs, manual and automatic modes for music playback, and configurable pauses and stops during the programmed schedule. It is also compatible with analog and digital audio.


Control panel: from one computer you can manage another 3 in different booths through the Rivendell panel. The highlight is its backup database feature, which copies all your files at the press of a button and can recover lost ones with the Restore database option.



Technical requirements: you will need at least a Pentium 4 CPU, 256 MB of RAM, the Linux Professional 9.x operating system, an AudioScience audio adapter and, optionally, a touch-screen monitor to make the DJ's or announcer's job easier. It wouldn't hurt to download one of these programs, since it is very useful, practical and, above all, free.

Main website: Rivendell
Downloads area: link
          Speed art inkscape   
Speed art: drawing Pigis in Inkscape, in a Linux Fedora environment with the KDE desktop. Speed art inkscape
Daniel Reyes
          Projeto Fábula   
Explanatory video for the Quase Fábula project. tags: blender, debian, fedora, gimp, inkscape, linuxmint, Projeto Fábula
Ricardo Graça
          Linux Games - Urban Terror Review   
Available from www.urbanterror.info or from www.playdeb.net if you're using Ubuntu, I take a brief look at one of the more popular games on ... tags: Fedora, Games, Linux, Open, Review, Source, Terror. Linux Games - Urban Terror Review
Linux Overmonnow
          Financial Engineer - IHS Markit - Canada   
Experience with Linux and Windows systems. We help decision makers apply higher-level thinking to daily tasks and strategic issues across a host of industries...
From IHS - Fri, 16 Jun 2017 04:44:36 GMT - View all Canada jobs
          Fixing Ubuntu Grub Rescue   
Four days ago, when I reinstalled my Ubuntu, I ran into a grub rescue error. After asking the uncle we all share (Google :v), I managed to find the solution. Now I will share that solution with those of you hit by the same problem. Note that this is only for Ubuntu Desktop, not for Ubuntu Server. :)

To fix this problem, you need to have your Ubuntu Desktop Live CD ready, because we are going to need it.

Note: if you do not follow the commands below correctly, you will most likely get the error: /usr/sbin/grub-probe: error: cannot stat 'aufs'

The steps you need to take are:
1. Boot from your Ubuntu Desktop Live CD.
If the welcome screen appears, click "Try Ubuntu".
2. If that screen does not appear, open a Terminal by pressing:
    CTRL + ALT + T

3. Type this command in the terminal: sudo fdisk -l
    This command will list all of your partitions.

    [screenshot: output of sudo fdisk -l]

     The partition whose System column says Linux is your Ubuntu partition.
     In the screenshot that partition was /dev/sda11.

4. Mount your Ubuntu partition by typing:
    sudo mount /dev/sdXX /mnt  (example: sudo mount /dev/sda11 /mnt)

5. If your /boot directory lives on a separate partition, mount it as well (sdXX here is that boot partition):
    sudo mount /dev/sdXX /mnt/boot

6. Mount virtual filesystems:
    sudo mount --bind /dev /mnt/dev 
    sudo mount --bind /proc /mnt/proc
    sudo mount --bind /sys /mnt/sys

7. So that only the grub utilities from the Live CD are executed, type:
    sudo mount --bind /usr /mnt/usr
    sudo chroot /mnt

8. Then type:
    update-grub
    update-grub2

9. Reinstall grub:
    grub-install /dev/sdX (example: grub-install /dev/sda, you do not need the partition number)

10. Verify the grub installation:
     sudo grub-install --recheck /dev/sdX

11. Exit the chroot by pressing CTRL + D

12. Unmount virtual filesystems
     sudo umount /mnt/dev
     sudo umount /mnt/proc
     sudo umount /mnt/sys
     sudo umount /mnt/boot

13. Unmount Live CD /usr
     sudo umount /mnt/usr

14. Finally, unmount the device:
     sudo umount /mnt

15. Reboot
     sudo reboot

That is all from me for today, see you again in another topic. ^_^

Source: http://opensource-sidh.blogspot.com/2011/06/recover-grub-live-ubuntu-cd-pendrive.html
          How to Install LAMP on Ubuntu   
A quick review of LAMP, from Wikipedia:
LAMP is an acronym for Linux, Apache, MySQL and Perl/PHP/Python. It is a free software stack used to run complete web applications.
The components of LAMP:
  • Linux as the operating system
  • Apache HTTP Server as the web server
  • MySQL as the database system
  • Perl, PHP or Python as the programming language
 If you already know what LAMP is but are still not sure how to install it, let's go through the tutorial below together:

First, open your terminal (Applications >> Accessories >> Terminal) and type the command below:
$ sudo apt-get install lamp-server^
Note: the caret (^) is not a typo.

Once the installation is under way, a prompt will appear asking for the MySQL root password. Enter the password you want. Another prompt will then ask you to confirm the password; type the same password again. After that, LAMP will continue installing.

When LAMP has finished installing, open your browser and type http://localhost/ in the URL bar. If a page with the header "It Works!" appears, your installation succeeded.
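As an extra check, a small sketch (not part of the original tutorial, and assuming the classic /var/www document root used by older Ubuntu releases) to confirm PHP is wired into Apache:

# create a test page in Apache's document root
echo "<?php phpinfo(); ?>" | sudo tee /var/www/info.php
# browse to http://localhost/info.php and look for the PHP information table
# remove the test page again when you are done
sudo rm /var/www/info.php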

Next, to install phpMyAdmin, type the following command in your terminal:
$ sudo apt-get install libapache2-mod-auth-mysql phpmyadmin
When the phpMyAdmin installation starts, four prompts will appear. Do the following for each prompt:
  1. A choice: select apache2 and press Enter.
  2. Another choice: use the Tab key to move the selection and choose Yes.
  3. A prompt for the root password of your MySQL server: enter the password you want.
  4. A prompt to confirm that password: enter your MySQL root password again.
Once the phpMyAdmin installation is finished, open your browser and go to http://localhost/phpmyadmin to see your phpMyAdmin page.

That is all from me, I hope it is useful for all of us. ^_^

          How to Upgrade Mozilla Firefox on Ubuntu   
Is this your first time using Ubuntu Linux? Not sure how to upgrade Mozilla Firefox? Relax, you have come to the right place. Here I will explain how to upgrade Mozilla Firefox on Ubuntu so that your Firefox always stays up to date.. Hehehe

Upgrading Firefox this way will not remove your bookmarks or history, but some add-ons may stop working if they do not support the new version of Firefox.

It looks like you can't wait, so let's just start the tutorial.. ;)

The method is actually very simple: open your terminal (Application > Accessories > Terminal)
Then type these commands:
$ sudo apt-get update
$ sudo apt-get install firefox
Explanation:
  • $ sudo apt-get update : "this command fetches the update information for all the programs on your computer. It does not install anything by itself."
  • $ sudo apt-get install firefox : "this command installs Firefox using the update information fetched by the previous command."

That is the end of the tutorial, I hope it is useful for all of us. :D

          How to Install a Modem on Ubuntu 9.10   
Here I will use wvdial, so first download wvdial here:
  1. libuniconf4.4_4.4.1-0.2ubuntu2_i386.deb
  2. libwvstreams4.4-base_4.4.1-0.2ubuntu2_i386.deb
  3. libwvstreams4.4-extras_4.4.1-0.2ubuntu2_i386.deb
  4. libxplc0.3.13_0.3.13-1build1_i386.deb
  5. wvdial_1.60.1_i386.deb
After you have downloaded wvdial, install those packages from the terminal. Assuming you downloaded the files into the Downloads folder:


~$ cd Downloads
~$ sudo su

# sudo dpkg -i libxplc0.3.13_0.3.13-1build1_i386.deb
# sudo dpkg -i libwvstreams4.4-base_4.4.1-0.2ubuntu2_i386.deb
# sudo dpkg -i libwvstreams4.4-extras_4.4.1-0.2ubuntu2_i386.deb
# sudo dpkg -i libuniconf4.4_4.4.1-0.2ubuntu2_i386.deb
# sudo dpkg -i wvdial_1.60.1_i386.deb

# exit
A few notes:
  • To open a terminal, click Application –> Accessories –> Terminal.
  • Sometimes a command in the terminal will ask for a password. The password you type will not be shown, so just keep typing it and press Enter; if it is correct, the command you entered will be executed and you will not be asked again.
METHOD #1
  1. Before installing, unplug every USB device from your computer except the mouse. :D
  2. Then plug your Airflash (or whatever) modem into the computer. Wait until a new drive appears, just as when you plug in a USB flash drive.
  3. Right-click that new drive and click EJECT, nothing else.
  4. Open a terminal and type the following command: lsusb
  5. Several lines like these will appear:
    Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 002 Device 003: ID 21f5:2008
    Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 001 Device 003: ID 0ac8:3343 Z-Star Microelectronics Corp. Sirius USB 2.0 Camera
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
  6. Note the ID 21f5:2008 line above: 21f5 is the vendor ID and 2008 is the product ID. If your modem is a different model, the vendor ID and product ID will usually be different. A modem's entry usually has nothing appended after the ID, or if it does, it is usually qualcomm or the name of some other telecom company.
  7. Still in the terminal, type the following: gedit /etc/udev/rules.d/99-evdo-modem.rules
  8. A notepad-like window will open; copy and paste the following rule and then save. Note: replace the idVendor and idProduct values with those of your own modem.
    SYSFS{idVendor}=="21f5", SYSFS{idProduct}=="2008", RUN+="/usr/bin/eject %k"
  9. Then type this command in the terminal: sudo modprobe usbserial vendor=0x21f5 product=0x2008
  10. Next, type: dmesg
  11. I assume you have already installed wvdial; if not, you must install it before moving on to the next step. If you have, continue: type sudo gedit /etc/wvdial.conf, a notepad-like window will appear, and then add the following (do not delete the configuration that is already there):
    [Dialer flexi]
    Auto DNS = on
    Init1 = ATZ
    Init2 = ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0
    Stupid Mode = yes
    Modem Type = Analog Modem
    ISDN = 0
    New PPPD = yes
    Phone = #777
    Modem = /dev/ttyUSB0
    Username = *******@free
    Password = ******
    Baud = 460800
    Dial Command = ATDT
    FlowControl = CRTSCTS
    Ask Password = 0
    Stupid Mode = 1
    Compuserve = 0
    Idle Seconds = 3600
  12. Now run the command: sudo wvdial flexi
  13. If output like the following appears:
    –> WvDial: Internet dialer version 1.60
    –> Cannot get information for serial port.
    –> Initializing modem.
    –> Sending: ATZ
    ATZ
    OK
    –> Sending: ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0
    ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0
    OK
    –> Modem initialized.
    –> Idle Seconds = 3600, disabling automatic reconnect.
    –> Sending: ATDT#777
    –> Waiting for carrier.
    ATDT#777
    CONNECT
    –> Carrier detected. Starting PPP immediately.
    –> Starting pppd at Sun Jan 3 15:19:52 2010
    –> Pid of pppd: 2054
    –> Using interface ppp0
    –> pppd: ?u` @l`
    –> pppd: ?u` @l`
    –> pppd: ?u` @l`
    –> pppd: ?u` @l`
    –> pppd: ?u` @l`
    –> local IP address 1**.1*.1**.2** –> this may differ
    –> pppd: ?u` @l`
    –> remote IP address 1*.1*.2*.* –> this may differ
    –> pppd: ?u` @l`
    –> primary DNS address 1**.1**.1**.1** –> this may differ
    –> pppd: ?u` @l`
    –> secondary DNS address 1**.1**.1**.1** –> this may differ
    –> pppd: ?u` @l` That means you are now connected to the internet. Still in the terminal, press CTRL + Shift + T and then type the following command: ping -s -t yahoo.com
  14. To disconnect, or stop the connection, just press CTRL + C
This method can also be used with other USB modems.. :D
Hope it is useful..

          Acer As4738G-372G32Mn   
Acer As4738G-372G32Mn
Core i3-M370 2.40Ghz
DDR3 2Gb
320Gb
ATI Radeon HD 5470/ 731Mb
Linux
14.0" HD Led LCD
Batt 6 CEll
DVDRW
Webcam, Card Reader, Bluetooth, HDMI

IN STOCK
Contact via Facebook: 
http://www.facebook.com/?ref=home#!/profile.php?id=1813729905
             
ACER 4738-482G50Mn


CORE I5-M480 2.67GHZ -DDR3 2GB -500GB -INTEL HD GRAPHICS/635MB, DEDICATED 128MB -BATT 6 CELL -WEBCAM, CARD READER, BLUETOOTH, HDMI -LINUX -14.0" HD LED LCD -DVDRW
Rp 5.550.000
IN STOCK 
Contact via Facebook:
http://www.facebook.com/?ref=home#!/profile.php?id=1813729905
          Acer AS4738-372G32Mn BT   
Acer AS4738-372G32Mn (SeniorNet)



Acer AS4738-372G32Mn BT
Processor : i3-M370 2.40GHz
RAM : 2 DDR3
HDD : 320
VGA : intel HD Graphic/ 675 MB
Feature : WC, CR, HDMI, BT
OS : Linux
Display : 14.0" HD Led LCD 
Optical : DVDRW
Color : Black
Price : Rp5.050.000
IN STOCK 
Contact : http://www.facebook.com/?ref=home#!/profile.php?id=1813729905



          Acer AS4738Z-P622G32Mn   

Acer AS4738Z-P622G32Mn
Processor : P6100 2.0GHz
RAM : 2 DDR3
HDD : 320
VGA : intel HD Graphic/ 123 MB
Feature : WC, CR, HDMI, BT
OS : Linux
Display : 14.0" HD Led LCD 
Optical : DVDRW
Color : Black
Price : Rp 4.550.000
+ bag
IN STOCK
Contact via Facebook : http://www.facebook.com/?ref=home#!/profile.php?id=1813729905
          For sale: Acer AS4738Z-P611G32Mn   
For sale: Acer AS4738Z-P611G32Mn, Rp 4.450.000

Intel® Pentium® processor P6100 (3 MB L3 cache, 2.0 GHz)
14" HD Acer CineCrystal™ LED-backlit TFT LCD
1 GB DDR3, 320 GB HDD, DVD Super Multi Double Layer Drive
...2-in-1 Media Reader, HDMI™ port with HDCP support, Intel HD Graphics
Integrated Wireless WiFi Link 802.11b/g/Draft-N
Acer Crystal Eye 1.3MP webcam, Built-in Speaker
Linux Operating System
Color: Black
See more

          Why "the sudo vulnerability happened because SELinux was enabled" is a misunderstanding   

The "OSS Vulnerability Watch" series regularly picks up and explains information about vulnerabilities in various open source software. On May 30, 2017, a vulnerability in the Linux sudo command allowing full privilege escalation was reported. This first installment summarizes the details and the related information.


          The "stat" command: display file attributes, dates and more   

This series introduces Linux commands, from basic syntax to options and concrete usage examples. This time we cover the "stat" command, which displays information such as a file's attributes and dates.
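A quick usage sketch (not from the article itself): stat prints a file's size, permissions, ownership and its access/modify/change timestamps, and GNU stat can also print selected fields with a format string.

# show the attributes and timestamps of a file
stat /etc/passwd
# print only selected fields with a format string
stat -c '%n is %s bytes, last modified %y' /etc/passwd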


          System call numbers, arguments, and system call wrappers as seen in the Linux kernel   

This series digs into the internals of the "printf()" and "main()" functions used in the C-language "Hello World!" program, from several angles: debugger analysis, disassembly, and reading the source code. Starting with the previous installment we have been tracing the write() and int $0x80 calls inside printf() from the Linux kernel source side; this time we learn more about system calls themselves.
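To watch those system calls from user space (a small sketch, not part of the article), strace can trace just the write() calls a process makes:

# trace only the write() system calls made by echo
strace -e trace=write echo "Hello World!"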


          Into The Light DEMO is here!!   
After a couple of months of hard work, I'm finally able to release a DEMO for "Into The Light", an indie game I'm developing and plan to publish around August! :D




Explore a Sci-Fi world full of puzzles

Wake up in a strange world where everything around you is completely dark. It looks like an empty place, but is it really?
Travel around this world, exploring a series of mazes and avoiding their dangers, while you try to solve the different puzzles using the diverse set of mechanics that your flare (the only resource you have) provides.



“Into The Light” is an upcoming indie game for PC, Mac and Linux. Its worlds are full of mystery and suspense, with only an anonymous entity that speaks to you from an unidentified place. The questions and messages from that strange voice are the only clues you have about why you are in this place and, even more, who you really are.

What is this thing that the voice keeps asking about? Why is it so important for you to remember?


Into The Light tries to take the player on a journey of wondering, letting them discover the meaning and importance behind the words of this strange voice.

Would you be able to find your way out?

Try it out: Download "Into The Light" DEMO!
          Leftovers and Linux   
Since these two topics seem to have consumed most of my day it was quite difficult to separate my thoughts when considering my blog post. So….I give you both! LOL. Those who know me are aware of my geekiness. Yep! I’m a closet geek and I’m the biggest fan of the Linux operating system. When […]
          MATLAB software download and study roundup   
MATLAB software roundup: MATLAB 2017, the MATLAB 2016b installer (three versions: Mac, Linux, Windows), a MATLAB 2016b installation tutorial, MATLAB 2015b, a MATLAB R2014a installation tutorial and crack, a MATLAB R2012a installation tutorial and download, a portable no-install cracked USB edition of Matlab 6.5 / 7.0 / 7.8, study materials, MATLAB R2014a complete ...
          I don't know whether Linus himself led it…   

I don't know whether Linus himself led the effort, but Linux adopted Unicode relatively early. When he visited Japan in 1995, someone asked him about multilingual support in Linux, and Linus answered that it was an application problem rather than a kernel problem, but that once Unicode spread in the near future most of that problem would be solved. The kernel was actually reworked for Unicode starting with 2.6, in 2002.

http://linuxjf.sourceforge.jp/JFdocs/kernel-docs-2.6/unicode.txt.html

Since Linus is a Swedish-speaking Finn, I think he found it easy to understand the problems of character encodings that include special characters, just like speakers of German, French, or Asian languages.


          Linus Torvalds still opposed to moving Linux to GPLv3…   

Linus Torvalds still opposed to moving Linux to GPLv3

http://www.itmedia.co.jp/enterprise/articles/0801/09/news075.html

 

The news source above is a bit old, but even today the Linux kernel (its core) is distributed under GPLv2, so his wishes have been respected.


          JUNIOR IT ASSISTANT FOR VFX COMPANY - Goldtooth Creative Agency Inc. - Vancouver, BC   
Experience in Windows and or Linux Networking, including Active Directory, DHCP, DNS. Disaster Recovery planning, implementation and testing....
From Indeed - Tue, 27 Jun 2017 20:13:00 GMT - View all Vancouver, BC jobs
          Everything | Gameplay Film   

Hey friends, this is my new project Everything, it's available now for PC, Mac, Linux & PS4. Thank you.

🌎 Steam [PC Mac Linux] ▶️ http://store.steampowered.com/app/582270
🌏 PlayStation 4 (NA) ▶️ http://play.st/everything
🌍 DRM free + Steam key ▶️ https://davidoreilly.itch.io/everything

More info - http://everything-game.com

Music © Ben Lukas Boysen 2017
Alan Watts recordings used with permission © Alan Watts 1973

find me @
http://davidoreilly.com / https://twitter.com/davidoreilly / http://instagram.com/davidoreilly / https://www.facebook.com/thedavidoreilly

Cast: David OReilly


          NILFS on Jaunty   
The latest release of Ubuntu includes the long-awaited Ext4 FS (works flawlessly on my system).
Ext4 is faster & more secure, but it still lacks the ability to manage FS snapshots (ZFS excels at that, but runs only in FUSE on Linux).
An interesting alternative is NILFS:
NILFS is a log-structured file system supporting versioning of the entire file system and continuous snapshotting which allows users to even restore files mistakenly overwritten or destroyed just a few seconds ago.


NILFS maintains a repo for Hardy; the Jaunty repos contain only the userland tools, which won't do us much good since we need the kernel module as well. This leaves us with the option of installing it from source (still quite easy).


# a prerequisite
$ sudo aptitude install uuid-dev
# installing kernel module, result module resides in /lib/modules/2.6.28-11-generic/kernel/fs/nilfs2/nilfs2.ko
$ wget http://www.nilfs.org/download/nilfs-2.0.12.tar.bz2
$ tar jxf nilfs-2.0.12.tar.bz2
$ cd nilfs-2.0.12
$ make
$ sudo make install
# installing user land tools
$ wget http://www.nilfs.org/download/nilfs-utils-2.0.11.tar.bz2
$ tar jxf nilfs-utils-2.0.11.tar.bz2
$ cd nilfs-utils-2.0.11
$ ./configure
$ make
$ sudo make install

Creating a file system on a file (ideal for playing around):

$ dd if=/dev/zero of=mynilfs bs=512M count=1
$ mkfs.nilfs2 mynilfs

The FS is only a mount away:

# mounting the file as a loop device
$ sudo losetup /dev/loop0 mynilfs
$ sudo mkdir /media/nilfs
$ sudo mount -t nilfs2 /dev/loop0 /media/nilfs/

Now let's create a couple of files:

$ cd /media/nilfs
$ touch 1 2 3
# listing all checkpoints & snapshots; the list on your system will vary
$ lscp
CNO         DATE      TIME  MODE  FLG  NBLKINC  ICNT
  7   2009-05-01  01:08:09    ss    -       12     6
 13   2009-05-01  19:05:34    cp    i        8     3
 14   2009-05-01  19:05:59    cp    i        8     3
 15   2009-05-01  19:07:09    cp    -       12     6
# creating a snapshot
$ sudo mkcp -s
# 15 is the new snapshot (mode is ss)
$ lscp
CNO         DATE      TIME  MODE  FLG  NBLKINC  ICNT
  7   2009-05-01  01:08:09    ss    -       12     6
 13   2009-05-01  19:05:34    cp    i        8     3
 14   2009-05-01  19:05:59    cp    i        8     3
 15   2009-05-01  19:07:09    ss    -       12     6
 16   2009-05-01  19:08:59    cp    i        8     6

# our post snapshot file
$ touch 4

Now let's go back in time into our snapshot. NILFS enables us to mount old snapshots as a read-only FS (while the original FS is still mounted):

$ sudo mkdir /media/nilfs-snapshot
$ sudo mount.nilfs2 -r /dev/loop0 /media/nilfs-snapshot/ -o cp=15 # only snapshots work!
$ cd /media/nilfs-snapshot
# as we might expect
$ ls
1 2 3
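
As a side note (a hedged aside, not in the original post): nilfs-utils also ships a chcp tool that can promote an existing plain checkpoint to a snapshot after the fact, or demote it again:

# promote checkpoint 16 to a snapshot so it survives garbage collection
$ sudo chcp ss 16
# turn it back into a plain checkpoint
$ sudo chcp cp 16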

NILFS has some interesting features; it's not production-ready yet, but it is well worth following its development.
          VMware recovery methods   
I've been using VMware Server and Player for a while now, and the hosted OS is usually XP. It so happens that I've had the "luck" to witness such images crash or become corrupted, and I have devised two nifty tricks/methods of data recovery that I think are worth sharing.
The first is data recovery from an unbootable image. It covers cases where you cannot boot XP but the image itself is readable and loaded by VMware (you see the black loading screen). The basic idea is to change the image's first boot device to a Linux live CD (Ubuntu works perfectly); after booting up, all you need to do is mount the NTFS volume and copy the data out (by scp, thumb drive, etc.).
The second one is more complex and handles the case where the image does not load at all (VMware complains about a corrupted vmx file). To recover from this state there are a couple of prerequisites:

  • VMware server installed.

  • You have another working XP image with a vmx file.

  • The corrupted image has a snapshot of a working state


The solution in this case is to copy the working vmx file into the folder of the corrupted image, load that image in VMware Server, and replace all the hard drive entries with the corrupted machine's disk images; all that is left then is to revert to the last saved snapshot and watch the system come back!
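For illustration, a hedged sketch (the file names are placeholders, not taken from the post) of the disk entries to edit in the copied vmx file, pointing each fileName at the corrupted machine's vmdk files:

scsi0:0.present = "TRUE"
scsi0:0.fileName = "corrupted-machine-disk.vmdk"
scsi0:1.present = "TRUE"
scsi0:1.fileName = "corrupted-machine-disk_1.vmdk"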
I hope that these two little methods help you as much as they helped me.
          Keeping your Zen with Linux   
Creative MP3 players seemed a reasonable choice in the early iPod-mania days. The single main problem I had for years with my trusty Nomad Zen Xtra is that there was no Linux interoperability whatsoever; the only (somewhat shaky) solution was to wait for Gnomad to mature.
Things have changed and accessing the Zen has worked with Gnomad, but two major issues remained:
  • No automation possible (like latest podcast copying with a matching playlist generated).

  • Ugly and limiting UI to deal with.


After a bit of digging I found mtpfs, a FUSE file system that can mount MTP devices. Installation and usage are as simple as:

$ sudo aptitude install mtpfs
$ sudo mkdir /media/zen
$ sudo mtpfs -o allow_other /media/zen/ # allowing other enables non root access to zen folders
$ ls /media/zen
Music My Playlists Playlists WMPInfo.xml

The Music folder holds all the mp3 files in a flat structure, and the Playlists folder holds m3u files in the following format (the player root is /):

/Music/Outlaws64.mp3
/Music/uupc_s01e18_high.mp3
/Music/JavaPosse217.mp3
/Music/seradio-episode117-branSelicOnUML.mp3
/Music/CyberSpeak_98_Nov_16_2008.mp3

It's quite easy to see how to automate the podcast downloading and copying with the podget utility; my Zen has been restored :)
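A minimal automation sketch along those lines (my own rough version, assuming podget's default library layout of one sub-folder per feed under ~/PODGET and the Zen already mounted at /media/zen as above):

#!/bin/bash
# fetch new episodes, copy them to the player, and build a matching playlist
podget                                    # download new podcast episodes
cp ~/PODGET/*/*.mp3 /media/zen/Music/     # flat copy into the player's Music folder
# generate an m3u playlist using player-rooted paths
for f in ~/PODGET/*/*.mp3; do
    echo "/Music/$(basename "$f")"
done > "/media/zen/Playlists/Latest Podcasts.m3u"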
          The spreading Java and Ubuntu love   
These days it seems that the Java community is more and more Ubuntu-friendly; the integration of Java and Linux (Ubuntu in particular) has been steadily improving, making it easy to install (with a bit of tinkering) Java 6 update 10, NetBeans and other Java goodies.
The Linux and Java open spirits are starting to merge; these are good times to be a Java developer on Linux systems.

Note:
It seems as if in Ubuntu 8.10 running sudo apt-get install sun-java6-jdk installs 6 update 10 by default.
          Ubuntu...I love you, I hate you   

I have been working on seeing if a .NET 3.5 application will port over to Linux, Ubuntu to be specific. I started with version 9.04, then 9.10 and now 10.04 as I find more and more that I need from Mono. I have a dual boot on a dev box, Windows 7 and Ubuntu.

An upgrade from Ubuntu 9.04 to 9.10 caused my mouse and keyboard to lock up. 

I was able to boot from a 9.10 CD. Then I upgraded to 10.04 as I needed Mono 2.2. The upgrade worked, but I lost my Windows boot; it seems GRUB somehow jumped in and messed up the Windows boot.

After Googling like crazy and trying this and that, these 2 links finally got me my Windows boot back:

http://sourceforge.net/apps/mediawiki/bootinfoscript/index.php?title=Boot_Problems:Boot_Sector

http://support.microsoft.com/kb/927392

So, I am now thinking about trying SuSE instead, as I hear/read it's more stable. I think a lot of my pains have been related to learning and getting used to Linux.

          Porting .NET 3.5 app to Linux   

Started doing research on porting a .NET 3.5 Windows Service application to Linux last week and today. There is a WinForms maintenance/admin tool that needs to be ported too.

Looking at Mono and related friends. I need to come up with a way to support MSMQ and Windows Workflow Foundation. It looks like the support for this in Mono is in alpha and not stable yet.


          How To Set Red hat / CentOS Linux Remote Backup / Snapshot Server   
I've an HP RAID 6 server running RHEL 5.x. I'd like to use this box as a backup server for my other Red Hat DNS and web servers. The server must keep hourly, daily and monthly backups. How do I configure my Red Hat / CentOS Linux server as a remote backup or snapshot server?
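One common approach, as a hedged sketch (host names and paths below are placeholders): pull each remote server over SSH with rsync and use --link-dest so unchanged files are hard-linked between rotations, which keeps hourly, daily and monthly copies cheap.

#!/bin/bash
# hourly pull-style snapshot of one remote server, using hard links to save space
SRC="root@dns1.example.com:/etc/"
DEST="/backup/dns1"
NOW=$(date +%Y-%m-%d_%H%M)
rsync -a --delete --link-dest="$DEST/latest" "$SRC" "$DEST/$NOW/"
ln -sfn "$DEST/$NOW" "$DEST/latest"   # point "latest" at the newest snapshot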
          Debian / Ubuntu Linux Install and Configure Remote Filesystem Snapshot with rsnapshot Incremental Backup Utility   
I would like to configure my Debian box to back up two remote servers using the rsnapshot software. It should make incremental snapshots of local and remote filesystems for any number of machines on a 2nd hard disk located at /disk1 (/dev/sdb2). How do I install and use rsnapshot on an Ubuntu or Debian Linux server?
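A hedged configuration sketch (fields in rsnapshot.conf must be separated by tabs, the option name varies between versions, interval vs. retain, and the remote host name is a placeholder):

# install rsnapshot
sudo apt-get install rsnapshot
# in /etc/rsnapshot.conf (tab-separated), keep the snapshots on the 2nd disk:
#   snapshot_root   /disk1/snapshots/
#   retain  hourly  6
#   retain  daily   7
#   backup  /etc/                            localhost/
#   backup  root@server1.example.com:/etc/   server1/
# validate the config, dry-run, then take an hourly snapshot (usually from cron)
sudo rsnapshot configtest
sudo rsnapshot -t hourly
sudo rsnapshot hourly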
          Re: an alternative tool   
On Linux you can use tcpdump to capture the traffic and analyze it later with Wireshark. tcpdump cannot decode USB communication, which is why you need Wireshark.

You can also use Wireshark directly, on Linux as well as on Windows, including live monitoring of the communication.

The simplest option on Linux is the old text interface. It only decodes the contents at a minimal level, but it is there and needs no additional tools.
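A small sketch of the capture side (assuming the usbmon kernel module and a libpcap/tcpdump build with usbmon support):

# load the USB monitoring module and list the usbmon capture interfaces
sudo modprobe usbmon
sudo tcpdump -D | grep usbmon
# capture USB bus 1 to a file, then open usb.pcap in Wireshark for decoding
sudo tcpdump -i usbmon1 -w usb.pcap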

          Welcome to the Laughing Horse Blog.   
This is a blog for the Laughing Horse Books and Video Collective. We are an all-volunteer-led and -run bookstore. We have a meeting space available for like-minded groups to either rent or use free of charge. There are 2 public access terminals (computers the public can use for free) running and testing free, open source software (Ubuntu Linux), and we also broadcast a free, open Wi-Fi signal, so you can bring your own Wi-Fi-ready device and surf away. We do enforce a "Safer Space" policy and expect those who enter the bookstore to work with us on keeping the place a "safer space" to come to; if you have questions about it, please ask the volunteer staffing the shift at the bookstore. So come on down. You might want to call first and make sure someone's there to let you in, since we are all volunteers and are not always able to fill all shifts for the whole shift. Thanks for your understanding. Mike-d. (part of the Laughing Horse Collective).

          Automate Azure Shutdown   

Automate Azure Shutdown. To save time and money, especially with virtual machines that are not required to operate outside regular business hours (for example, development and test VMs), it helps to be able to shut down and deallocate these virtual machines and then power them back on, all on a schedule. As you are billed by the minute for running virtual machines on Microsoft's Azure, automating a scheduled shutdown and power-on of your virtual machines will save you both time and money. Azure Overview: Microsoft's Azure cloud is one of the leading providers of Infrastructure as a Service (IaaS), as well as a Software as a Service (SaaS), Platform as a Service (PaaS) and Disaster Recovery as a Service (DRaaS) cloud provider. It is flexible and supports a huge selection of operating systems and programming languages. You can run Windows or Linux servers, or even Windows and Linux containers with Docker integration. Azure also allows you to build applications...
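For reference, a minimal sketch of the two commands such a schedule wraps (using the Azure CLI; the resource group and VM names are placeholders):

# stop paying for the compute of a dev/test VM outside business hours
az vm deallocate --resource-group dev-rg --name test-vm-01
# bring it back up in the morning
az vm start --resource-group dev-rg --name test-vm-01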

The post Automate Azure Shutdown appeared first on SmiKar Software.


          YouSerials.com: Nero Linux 4 - Version 4 ( linux only)   
Nero Linux 4 - Version 4 ( linux only) - youserials.com
          Comment on Daddy’s Little Girl by Stella   
Skype has opened its web-based client beta to the world, after launching it broadly in the U.S. and U.K. earlier this month. Skype for Web also now works with Chromebook and Linux for instant messaging (no video and voice yet; those require a plug-in installation). The expansion of the beta adds support for a longer list of languages to help bolster its worldwide usability
          Rally against the 65-hour week - Ciudad Real   
14/12/2008 12:00
Europa/Madrid
14/12/2008 12:00
Europa/Madrid

The CNT union of Ciudad Real has called a rally in protest against the possible approval by the European Parliament of the well-known 65-hour directive.

It will take place on 14 December at 12:00 in the Plaza del Pilar.

AGAINST UNEMPLOYMENT: SHARE THE WEALTH, SHARE THE WORK!!

NO TO THE 65 HOURS!!

 

NEITHER 65 HOURS, NOR 48, NOR 40. FOR THE THIRTY-HOUR WEEK

 

Soon we will witness another of the retrograde decisions the EU wants to impose on us: the return to slavery with the 65-hour working week.

 

The 65-hour nonsense is a continuation of the capitalist "offensive" to increase profits at the cost of workers' lives, extending the sequence begun by previous labour reforms that brought cheaper dismissals, precarious contracts... and the current degrading state of the labour market.

 

With the approval of the Directive, workers would witness the loss of one of the greatest achievements of the working class in its union struggle, a struggle that culminated in 1919 with the eight-hour day won in the La Canadiense strike.

 

But let us look at the consequences, foreseeable and unforeseeable, direct and indirect, that working 65 hours a week would have on us:

 

  • We would go from working 8 hours a day to 13.

  • The freedom and voluntariness supposedly granted to the worker to decide whether to work 5 more hours a day is only theoretical: by removing this matter from collective bargaining and leaving it to an individual agreement, workers are pushed in practice to accept whatever the employer demands, as already happens today with overtime.

  • Excessive working hours increase the risk of workplace accidents and illness, and therefore undermine long-term health.

  • It becomes much harder to reconcile work and family life. Our lives would revolve around work; they would be reduced to nothing but work.

  • The sole purpose of this Directive is to increase company profits at the cost of the sweat and blood of workers.

In these turbulent and confusing times, when nothing is what it seems and society speaks the language of disorder, the only credible response is the one the CNT has maintained for a hundred years: the organisation that fought hardest for the introduction of the eight-hour day and even managed to reduce it to 30 hours a week in countless sectors.

The approval of the Directive will affect you sooner or later. The CNT calls on you to join a workers' organisation in which we are all workers, decisions are taken collectively, and you are the master of your own destiny.

 

 


          14 healthy-living details every webmaster should know   

  Fellow grassroots webmasters: if you don't want to die of overwork, go bald before your time, or find yourself at a young age with a belly like a general's, as if ten months pregnant, then you had better read this article carefully. As Wuxian Blog puts it: a webmaster who doesn't look after his own body is not a good webmaster.

  There is no "hard" technical material in this article, but it may genuinely benefit you for life. You already understand the reasoning without me spelling it out. Below I will share some health tips from two angles, work and daily life. They apply not only to webmasters but to everyone who "makes a living at a computer".

  Healthy work habits for webmasters

  1. Don't sit for long stretches. Sitting for a long time brings a series of troublesome problems, such as a bulging belly, sciatica, colorectal trouble, neck problems and so on. The fix is simple: if your boss allows it, stand for a while when working at the computer.

  2. Don't face the screen for hours on end. Experts say you should rest your eyes every hour, but for webmasters that is simply impossible; grassroots webmasters with a dream forget the time as soon as they start working, so we can only compromise: take a break every three hours. When resting your eyes, look into the distance, look at something green, or roll your eyes in a pattern: 360 degrees clockwise, then counter-clockwise, repeated three times.

  3. If your neck feels tired, don't roll it in circles. The correct move is to "write" the character 米 with your head. Don't ask why; that is what the experts say.

  4. Put some green plants on your desk; they can absorb some of the computer's radiation and brighten up your work environment. Or put a fish tank there, since plain water also absorbs radiation. The media have reported that radiation can cause male infertility. Webmasters, for us men this point is the most important of all!

  5. After work, or during breaks, remember to wash your face; the experts say "water washes the radiation away".

  Healthy living for webmasters

  1. Webmasters should exercise more. The simplest and best exercise is running, and you must run until you sweat, which speeds up circulation and metabolism. Supposedly the body also releases endorphins when running, the same substance produced when you fall in love; many webmasters are too busy to find a girlfriend, so go running and have a romance with yourself. That is a joke, of course! Exercise outdoors if you can, or buy a treadmill for home; if you have neither the time nor the means, jog in place, which is what Wuxian Blog does.

  2. Don't eat in front of the computer; it is bad for digestion and will harm your stomach and intestines.

  3. Webmasters, don't live on instant noodles just because work is busy! Even pickles with steamed buns are better than instant noodles. Experts say the chemicals in instant noodles take 30 days to be fully expelled from the body, and during those 30 days they go through all sorts of changes and reactions in your belly.

  4. Quit smoking. Many people like to smoke while they are online. That is the double harm of nicotine plus computer radiation, and not to be underestimated!

  5. Keep an optimistic, positive mindset and don't take things too hard. Without an optimistic mindset you had better give up being a webmaster early. I know someone whose small site was dropped by Baidu; he drowned his sorrows for two or three days and complained to everyone about how his efforts had gone down the drain. I said: is it really worth it? A bad mindset also leads to many physical illnesses.

  6. Take part in community activities, don't lose yourself in the online world, and spend time with people. Don't end up with a finished website while your friends have left you one by one; that is not worth it.

  7. Don't stay up late. Personally I think staying up late does the most damage. Just look at the people who spend all night gaming in internet cafés; quite a few of them die mid-game, which is terrifying! The cure for staying up late is to sleep early and rise early, be a morning person, and finish the day's work during daylight.

  8. Take a walk after dinner to relax and look for inspiration. As the saying goes, "a hundred steps after a meal and you will live to ninety-nine." Just don't walk immediately after eating; about 40 minutes later is best.

  9. Spend more time with your parents, spouse and children. Family is what matters most, because they bring you happiness, and happiness is the best medicine for keeping your body healthy.

  To sum up: your body is the capital of the revolution. Fellow webmasters, don't blindly wreck your health for the sake of your career. Check how many of the points above apply to you; none is best, and if some do, fix them as soon as possible. One more note: this article is for reference only. What I have written is certainly not complete, nor necessarily what suits you best, so adjust your habits gradually according to your own situation. If you have more suggestions, you are welcome to share them with Wuxian Blog.

  Author: Wuxian Blog

Continue reading "14 healthy-living details every webmaster should know"...

Category: Website operations | Add a comment (0)

Related articles:


          Application Administrator (application operations) (m/f) - NorCom Information Technology - Sindh   
Your tasks: you operate Linux applications on highly available middleware platforms and you support changes coming from software development,
From NorCom Information Technology - Sat, 17 Jun 2017 07:31:38 GMT - View all Sindh jobs
          CentOS 6.8 image with Qt5.7, Python 3.5, LLVM 3.8   
While trying to bring my setup to package KDevelop standalone for Linux into a shape where it has a nonzero probability of me picking it up again in half a year and actually understanding how to use it, I created a docker base image which I think might be useful to other people trying to package Linux software as well. It is based on CentOS 6.8 and includes Qt 5.7 […]
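A hedged sketch of how such a CentOS 6.8 base is typically used for building (generic commands, not the author's exact toolchain or image name):

# pull the CentOS 6.8 base and start an interactive build container with the sources mounted
docker pull centos:6.8
docker run --rm -it -v "$PWD":/work -w /work centos:6.8 bash
# inside the container, install a basic toolchain before building
yum -y groupinstall "Development Tools"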
          KDevelop 5.0 standalone executable for Linux   
I am currently at the KDE sprint in Randa, Switzerland. There are about 40 nice people here, working on making KDE software awesome. This year’s focus is on deploying our software more nicely — on Windows and OS X, but on Linux as well. Especially for user-faced applications like KDevelop, it really makes a big difference whether you use today’s version, or the one from two years ago. Many people […]
          Comment on Beta News – Flash Player NPAPI for Linux by AngryPenguinPL   
Tested on Ubuntu 16.04 x64 with Unity.
          Comment on Beta News – Flash Player NPAPI for Linux by AngryPenguinPL   
I see one issue. This Flash 23 plugin works, but not on the Twitch page. When I try to watch a stream on Twitch I see only a black screen; only the sound works fine, without video. When I switch to Flash 11.2, Twitch works fine.
          Comment on Beta News – Flash Player NPAPI for Linux by AngryPenguinPL   
I forgot. I'm happy to see a new Flash release for Linux and I want to say thank you to the Adobe devs. But I also need these few missing features in the new Flash player. Can you add them, guys?
          Comment on Beta News – Flash Player NPAPI for Linux by AngryPenguinPL   
To the Adobe devs: guys, why release an incomplete Flash player for Linux? If you're doing it, please keep it in feature sync with Windows and Mac. Give us a fully featured app, not a semi-app, but a full app with support for DRM and GPU 3D.
          Comment on Beta News – Flash Player NPAPI for Linux by Just in Beaver   
Acrobat Reader? Probably not, unless the demand skyrockets. Why would you want to run it on a FOSS distro? There are alternatives, including but not limited to "Evince." Check your distro's repository and Google for additional FOSS programs for PDFs.
          Comment on Beta News – Flash Player NPAPI for Linux by Hagen   
Unfortunately, it doesn't seem to work on our servers (running CentOS 6.8):
$ ldd /usr/lib64/flash-plugin/libflashplayer.so > /dev/null
/usr/lib64/flash-plugin/libflashplayer.so: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.15' not found (required by /usr/lib64/flash-plugin/libflashplayer.so)
/usr/lib64/flash-plugin/libflashplayer.so: /lib64/libc.so.6: version `GLIBC_2.15' not found (required by /usr/lib64/flash-plugin/libflashplayer.so)
/usr/lib64/flash-plugin/libflashplayer.so: /lib64/libc.so.6: version `GLIBC_2.14' not found (required by /usr/lib64/flash-plugin/libflashplayer.so)
Any clues?
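A quick way to confirm the mismatch (not from the comment itself): list the symbol versions the system libraries actually provide and compare them with what the plugin requests. CentOS/RHEL 6 ships glibc 2.12, older than the 2.14/2.15 the plugin asks for.

# show the GLIBCXX versions available in the installed libstdc++
strings /usr/lib64/libstdc++.so.6 | grep ^GLIBCXX
# print the glibc version of the system (executing libc prints its banner)
/lib64/libc.so.6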
          Comment on Beta News – Flash Player NPAPI for Linux by Kenny   
From someone who uses Flash Player every day, I want to say thank you. It was easy to install: I just dropped libflashplayer.so into a newly created "plugin" folder at /home/myusername/.mozilla, and 2 minutes later I was watching PC Principal kick ass.
          Comment on Beta News – Flash Player NPAPI for Linux by CarnivioN   
Does this mean the content debugger of the latest version will be released? And what about the latest AIR runtime for Linux? Will they start supporting that again?
          Comment on Kitchen Rock Star Class for Teenagers {Free Downloads!} by Kermit   
Skype has opened up its web-based client beta to the world, after launching it widely in the U.S. and U.K. earlier this month. Skype for Web also now supports Linux and Chromebook for instant messaging (no video and voice yet; those require a plug-in installation). The expansion of the beta adds support for a longer list of languages to help strengthen its international usability
          A Guide to Installing Your Own Computer   
Nowadays computers are used widely by many segments of society. The question that often comes first is how to install a computer's software. Below are steps you can use as a general guide for installing the programs on a computer, whether a PC or a laptop.



    How to set up your computer yourself
  1. Check the hardware. Make sure the components of your computer or laptop are assembled correctly and completely and meet the minimum requirements of the operating system you want to install. Better still, have supporting hardware such as the network card, printer, scanner and so on attached to the computer before you start the installation, so the operating system can automatically detect your devices from the outset.
  2. Fresh install or reinstall? Determine whether the installation will be done on a new computer that has no operating system or programs yet, or on a computer that already has an operating system plus applications and data. Installation is easier and faster on a new computer. If you are reinstalling (because the previous operating system has problems or you want to switch to another one), make sure your documents, pictures, photos, films and other data have been backed up to CD or a flash drive, or at least live on a different partition than the one you will install to.
  3. Decide which operating system you will install. Will it be Windows XP (the most popular Windows), Windows 7 (the newest Windows), or Ubuntu Linux (a free/open-source operating system)? If you want to install Windows, make sure you have the original CD you bought and note the serial number that must be entered during installation. You could consider Ubuntu, since it is free and complete with supporting applications. If your computer has no CD/DVD drive (netbooks usually don't), consider whether you need to buy or borrow an external CD/DVD drive or use a flash drive as the installation source. How to prepare an operating system installer on a flash drive may be covered in a separate post. 
  4. Prepare the driver CDs. After the operating system is installed, the display may not look right, there may be no sound, the printer may not work, and so on. You will need the driver CDs or disks to install the VGA and sound card drivers, the printer driver, and drivers for other peripherals. If you don't have them, download them from the hardware vendor's website or search for alternatives with a search engine.
  5. Prepare the software and applications you need. Besides the operating system you will want an office suite, graphics software, an antivirus, internet tools, your favorite games, and system maintenance utilities. If you use Ubuntu Linux, most of this is probably already included and you can skip this step.
  6. Add other programs as needed. Over time you may also want to know how to install fonts, Flash Player, a PDF reader, or even development tools such as XAMPP for building websites. These installation techniques are worth understanding and mastering.
  7. Remove software you don't need. Make sure you can uninstall programs you installed earlier. Some software you installed, or that was installed along with something else, may turn out to be unneeded or never used. Removing unneeded programs frees up disk space and also speeds up your computer. 
Given how long and numerous the procedures and steps for each stage above are, I will try to cover them one by one in separate posts. Stay tuned and good luck!

          Types of Databases and Their Technologies    
In this era of computers and the internet, the role of databases is dominant. Almost all administrative work in offices and institutions is now integrated into computing systems built around a unified database model. Likewise, online services on the internet are inseparable from databases. So what kinds of technology are used to manage databases?



Database Server

Berikut ini adalah daftar jenis-jenis teknologi database, yang sebagian besar merupakan Relational Database Management System (RDBMS):
  • Apache Derby (previously known as IBM Cloudscape) is an open-source database engine developed by the Apache Software Foundation. It is commonly used in Java programs and for online transaction processing.
  • IBM DB2 is a proprietary (commercial) database engine developed by IBM. DB2 comes in three variants: DB2 for Linux - Unix - Windows, DB2 for z/OS (mainframe), and DB2 for iSeries (OS/400).
  • Firebird is an open-source database engine developed by the Firebird Project. It commonly runs on Linux, Windows, and various Unix variants.
  • Microsoft SQL Server is a proprietary (commercial) database engine developed by Microsoft, although a freeware edition is also available. It is commonly used on various versions of Microsoft Windows.
  • MySQL is an open-source database engine developed by Oracle (previously Sun and MySQL AB). It is the most widely used database engine in the world and is commonly deployed for web applications.
  • Oracle is a proprietary (commercial) database engine developed by Oracle Corporation. It comes in several variants aimed at different segments and use cases.
  • PostgreSQL, or Postgres, is an open-source database engine developed by the PostgreSQL Global Development Group. It is available on many operating system platforms such as Linux, FreeBSD, Solaris, Windows, and Mac OS.
  • SQLite is an open-source database engine developed by D. Richard Hipp. It is known for its very small footprint, so it is commonly embedded in other applications, for example in web browsers.
  • Sybase is a proprietary (commercial) database engine developed by SAP, targeted at mobile application development.
  • WebDNA is a freeware database engine developed by WebDNA Software Corporation, designed for use on the web.
  • Redis is an open-source, in-memory key-value store developed by Salvatore Sanfilippo (sponsored by VMware). It is accessed over the network and is typically used for caching and messaging.
  • MongoDB is an open-source database engine developed by 10gen. It is available for many operating system platforms and is known to be used by Foursquare, MTV Networks, and Craigslist.
  • CouchDB is an open-source database engine developed by the Apache Software Foundation, focused on use on web servers.
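To make the list concrete, here is a rough command-line sketch of talking to three of the engines above. It assumes the respective clients (sqlite3, mysql, psql) are installed; the database name, table, and credentials are placeholders.

# SQLite is a single library/binary and needs no server process:
sqlite3 test.db "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);"
sqlite3 test.db "INSERT INTO users (name) VALUES ('alice'); SELECT * FROM users;"

# Client/server engines such as MySQL and PostgreSQL are reached through their own clients:
mysql -u root -p -e "SHOW DATABASES;"
psql -U postgres -c "\l"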

          A Guide to Installing Your Computer Yourself    
These days computers are used widely by all kinds of people. A common first question is how to install a computer. Below are steps that can serve as a general guideline for installing computer software, whether on a PC or on a laptop.



    How to Install Your Computer Yourself
  1. Check the hardware. Make sure the components of your computer or laptop are assembled correctly, are complete, and meet the minimum requirements of the operating system you want to install. Better still, have supporting hardware such as the network card, printer, scanner, and so on attached to the computer before you start, so that the operating system can detect your devices automatically from the beginning.
  2. Fresh install or reinstall? Determine whether the installation will be done on a new computer that has no operating system or programs yet, or on a computer that already has an operating system plus applications and data. Installation is easier and faster on a new computer. If you are reinstalling (because the previous operating system is broken or you want to switch to another one), make sure that your documents, pictures, photos, videos, and other data have been backed up to CD or a flash drive, or are at least stored on a partition different from the one you will install to.
  3. Decide which operating system you will install. Will you install Windows XP (the most popular Windows), Windows 7 (the newest Windows), or Ubuntu Linux (a free/open-source operating system)? If you want to install Windows, make sure you have the original CD of the Windows you purchased, and note down the serial number that must be entered during installation. Consider using Ubuntu, since it is free and comes complete with supporting applications. If your computer has no CD/DVD drive (netbooks usually lack one), decide whether you need to buy or borrow an external CD/DVD drive or use a flash drive as the installation source. How to prepare the operating system installer on a flash drive may be covered in a separate post; a rough sketch also follows this list. 
  4. Prepare the driver CDs. After the operating system is installed, the display may not be optimal yet, there may be no sound, the printer may not work, and so on. You will need the driver CDs or disks to install the VGA and sound card drivers, the printer driver, and the drivers for other peripherals. If you do not have them, you can download them from the hardware maker's website or look for alternatives with a search engine.
  5. Prepare the software and applications you need. For example, besides the operating system you will need an office suite, graphic design software, an antivirus, internet access utilities, your favorite games, and system maintenance tools. If you use Ubuntu Linux, most of these are probably already included and you can skip this step.
  6. Add other programs as needed. Over time you may also want to know how to install fonts, Flash Player, and a PDF reader, or even development tools such as XAMPP for building websites. You should understand and master these installation techniques.
  7. Remove software you do not need. Make sure you can uninstall programs you installed earlier. Some software that you once installed, or that was installed unintentionally, may turn out to be unneeded or never used. Removing unneeded programs frees up hard disk space and also speeds up your computer. 
Given how long and numerous the procedures and steps for each stage above are, I will try to cover them one by one in separate posts. Enjoy, and good luck trying them out!
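As a rough sketch of the flash-drive preparation mentioned in step 3 (on a Linux machine; the ISO file name and /dev/sdX are placeholders you must replace with your own):

lsblk                                              # identify the flash drive, e.g. /dev/sdX
sudo dd if=ubuntu-desktop.iso of=/dev/sdX bs=4M status=progress conv=fsync
sync                                               # make sure everything is written before unplugging
# WARNING: dd overwrites the target device completely; double-check the device name first.
# On older coreutils, drop status=progress.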

          Junior Database Administrator / Report Developer   
RI-East Greenwich, We are currently engaged with a client who is seeking a Junior SQL Database Administrator / Developer on a full-time direct employee basis. In this role, the Database Administrator will: Assist in the maintenance, performance and uptime of SQL and other database instances in a Linux - UNIX environment. Create and modify bash scripts and Windows command scripts. Manage tablespace, storage allocatio
          Staying Ahead of the Curve   
Tenable.io Malicious Code Prevention Report

As malware attacks continue to make headlines, many organizations struggle to stay ahead of the complex, evolving threat landscape. Attackers use both old and new techniques to deliver malware: exploiting existing vulnerabilities, evading security solutions, and using social engineering to deliver malicious payloads. Millions of unique pieces of malware are discovered every year, and even with the best security controls in place, monitoring the thousands of endpoints within your network for malware can be nearly impossible.

Use Tenable.io to quickly address systems that are at risk

Once inside your network, malware can disable security controls, gain access to privileged accounts, replicate to other systems, or maintain persistence for long periods of time. If these risks are not addressed quickly, they can result in long term, devastating consequences for any organization. Using the Malicious Code Prevention Report from Tenable.io™ provides you with the visibility needed to quickly address systems that are at risk.

Malicious Code Prevention Report

Malware scanning

Tenable.io includes a customizable malware scan template where you can incorporate both known-good and known-bad MD5 hashes, along with a hosts file whitelist. On Windows systems, the default hosts file contains comment lines, including two commented-out localhost address entries. Most systems query local DNS servers to resolve domain names to IP addresses, but some organizations add entries to the hosts file for dedicated systems within their environment or to block unauthorized websites. Once the hosts file is modified, the local system uses its entries first and bypasses the records on your DNS server.

Malware also targets the hosts file to insert redirects to malicious sites or block security solutions from obtaining patches and security updates. For organizations utilizing the hosts file, the Malware Scan template provides you with the ability to add whitelist entries that would otherwise be flagged as abnormal by existing security solutions within your environment.
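For illustration only, a modified Windows hosts file (C:\Windows\System32\drivers\etc\hosts) might look like this; the host names are invented, and the first added entry is the kind an administrator would whitelist in the scan template:

# Default entries shipped with Windows (commented out)
#       127.0.0.1       localhost
#       ::1             localhost

10.0.0.25       intranet.example.com     # legitimate internal entry worth whitelisting
127.0.0.1       av-updates.example.net   # malware-style entry that blocks security updates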

Malware Scan template

The File System Scanning option lets you scan specific directories within your Windows environment, such as C:\Windows, C:\Program Files, and the user profile directories, which are frequently used to install malware. You can also scan for malware in directories such as C:\ProgramData that are hidden by default on Windows systems.

Scanning files

Organizations can have any number of mapped drives and devices connected to a system. Most anti-virus solutions only scan default locations such as the C:\ drive, and without additional rules in place, malware could easily bypass this security control via a flash drive or external USB drive.

The Malware Scan template provides an additional layer of security to scan network drives and attached devices that may not be targeted by your anti-virus solution. Using the Custom File Directories option, you can include a list of directories within your scan to target mapped drives and attached devices.

Yara rules can also be incorporated into your Tenable.io malware scan. Using a combination of regular expressions, text strings, and other values, Yara will examine systems for specific files that match values within the rules file.
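As a minimal illustration of the format (the rule name, strings, and patterns below are invented, not taken from Tenable documentation), a Yara rule combines text strings, hex patterns, and regular expressions with a condition:

rule Example_Suspicious_Strings
{
    meta:
        description = "Illustrative rule only"
    strings:
        $txt = "connect-to-c2.example" ascii
        $hex = { 6A 40 68 00 30 00 00 }
        $re  = /eval\(base64_decode\(/
    condition:
        any of them
}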

Vulnerabilities

The Malicious Code Prevention report provides a comprehensive overview of, among other things, systems infected with malicious backdoors, hosts communicating with botnets, and vulnerabilities that can be exploited by malware.

Along with malware and malicious processes, this report also highlights systems with vulnerabilities that are exploitable by malware. Exploitable vulnerabilities can provide attackers with a backdoor into your network to enable privilege escalation or launch malicious code.

Hosts with vulnerabilities that are exploitable by malware

Tenable.io uses both active and passive methods to detect malicious content, including web traffic analysis, md5sum matching, public malware databases, and links pointing to known malware operators. Web servers hosting malicious content are also included within this report. Malicious code can be injected into a website through a cross-site scripting (XSS) or SQL injection vulnerability.

Attackers often target websites to deliver malicious payloads to a larger audience through message boards or blog posts. Malicious code often remains hidden within iframes, JavaScript code, and other embedded tags that link to third-party websites. This data can help you target and remediate issues on web servers before critical assets or services are impacted.

Botnets often use the HTTP protocol as well as encryption to evade detection by modern security solutions. Information reported by Nessus® and Nessus Network Monitor highlights active inbound and outbound communications with command and control (C&C) servers.

Hosts interacting with known botnets

Keeping your anti-virus clients updated helps to ensure your systems remain protected from malware. This report provides valuable information on the status of your anti-virus and anti-malware solutions, ensuring that they are installed and up to date. The Malware Protection chapter provides a summary of hosts running up-to-date anti-virus clients per operating system.

Anti-virus status

Tenable.io will analyze hosts with outdated anti-virus clients and provide targeted information you can use to remediate issues with those clients. Data is collected from Nessus checks that report the status of various anti-virus clients across Windows, Linux, and Unix-based platforms. This information can also help you determine whether your anti-virus client has been disabled.

Outdated anti-virus details

No organization is immune from vulnerabilities and attacks. Knowing how systems are compromised can help target response efforts and minimize future damage. Tenable.io provides you with critical insight needed to measure the effectiveness of your security program, and to gain insight into your current risk posture. Using the Malicious Code Prevention report by Tenable.io provides you with targeted information to prioritize remediation efforts, close malicious entry points, and stay one step ahead of attackers and other persistent threats.

Start with Tenable.io

To learn more about Tenable.io, visit the Tenable.io area of our website. You can also sign up for a free trial of Tenable.io Vulnerability Management.


          Latest CANON Digital Photo professional (DPP) UPDATE   
Are you a CANON user who uses CANON's Digital Photo Professional (DPP) to work on your RAW files? Well, there have been some very good improvements made with an update on December 12, 2012. If you already have older versions running on your system, including the EOS UTILITY and DPP, it's dead easy to update your software from the CANON site. First go to CANON here: CANON SOFTWARE UPDATES. Ignore the stuff at the top of the page about the new 6D (just info), scroll down towards the bottom of the page and you will see a DRIVERS AND SOFTWARE section with a drop-down menu below it. Simply choose your operating system (Windows, MAC or Linux), then select your OS version; for me it's Windows Vista. You then get a menu of 4 options; choose SOFTWARE. You then see:
  • Canon RAW Codec 1.11.0, 12/12/12, 28.52 MB
  • Digital Photo Professional 3.12.52 Updater, 12/12/12, 64.05 MB
  • EOS Utility 2.12.3 Updater for Windows, 12/12/12, 77.03 MB
  • Picture Style Editor 1.12.2 Updater for Windows, 12/12/12, 80.90 MB
  • ImageBrowser EX 1.1.0 for Windows, 11/08/12, 124.86 MB
Simply choose what you want (I updated all of them). Choose run and then install for each item, and all updates to your current software are made automatically. I updated them all, then rebooted my computer to see and explore the new features :) Enjoy ...
          Bash Shell Completing File, User and Host Names Automatically   
Bash can auto-complete your file names and command names. It can also auto-complete lots of other stuff, such as: usernames, hostnames, variable names, and fine-tuning of files and names with the ESC keys. Match variable: if the text begins with $, bash will look for a variable. For example, open a terminal and … Continue reading "Bash Shell Completing File, User and Host Names Automatically"
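As a small example of the variable completion the excerpt describes: type a partial name after $ and press Tab, or reproduce the same lookup non-interactively with the compgen builtin (which variables exist depends on your shell environment):

$ echo $HOSTN<Tab>      # bash completes the variable name:
$ echo $HOSTNAME
$ compgen -v HOSTN      # list shell variables starting with HOSTN
HOSTNAME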
          Akademy Qt5 QtQuick course and Nokia N9 fun   

KDE Project:

I went to the day long course on Monday given by KDAB, about QtQuick for Qt5, and it was excellent. I had used Qt4 and QML for a Symbian phone project earlier this year, and the combination worked very well.

The course started with the basics, and in most respects Qt5 QML is pretty similar to the Qt4 version. I learned a few things about the parts of QML that I hadn't tried, like state machines and transitions. It got interesting at the end when Kevin Ottens explained how you could include OpenGL fragment and vertex shaders in QML. This Pimp my video: shader effects and multimedia blog on Qt Labs shows some sample code and a video of what it looks like running on a phone. My personal OpenGL foo isn't up to writing my own shaders, but there is a pretty complete library of canned effects, such as drop shadows, that comes with Qt5.

At lunch time, Quim Gil announced that all the registered attendees would receive Nokia N9 phones, and when we got them it felt like Christmas had come early this year for me. It really is a very fine phone. The UI is polished with plenty of apps, and the industrial design is slick. I tried the SIM from my ancient dumb Nokia phone and unfortunately it was too big to fit. I will have to get the data transferred to another card, or someone told me that it was possible to cut down an old large SIM to make it smaller. I am certainly looking forward to try it out as a phone.

Today I've been getting the N9 working as a development device. It is quite straightforward and you just need to activate 'developer mode' via an option on the Security settings panel. Then you configure Qt Creator with SSH keys to connect via the N9's USB cable (or WLAN works too) and you're done. I got a Hello World app working, and then tried to port the large Symbian app to the N9. The port was a bit more trouble than I was expecting due to a problem with upper case characters in the app name, and other problems related to doing development under Mac OS X. I had developed the Symbian app under Linux and Windows, and I think it would be best to give up on Mac OS X for the N9 and use Linux instead.

I ssh'd onto the N9 and fished around a bit to see what was there. It has perfectly standard Debian packaging, and when you build an app in Qt Creator a .deb is transferred to /tmp and installed from there. The root partition has 2GB free to install apps which should be plenty and there was another partition with 7GB free for pictures, documents, maps and so on.

I did some work on the QSparql library that is used for interfacing Qt apps with the Tracker Nepomuk store, and it was nice to see that the lib came with the phone. But it didn't come with the driver for accessing SPARQL endpoints though, and I couldn't find that driver with an 'apt-cache search' query. I might have to build and install it myself.

I noticed that none of the apps that were pre-installed used QML, and they used the obsolete MeegoTouch libs instead. I asked about installing Qt5 on the N9 at the QtQuick course and apparently it is quite straightforward.

I can confirm what other people have said about the N9, and how great the UI and physical design are, with a very nice development environment. I also thought Qt/QML on Symbian works really well. So I feel puzzled by Stephen Elop's unsuccessful Windows Phone 7 strategy. If WP7 isn't taking off now, I can't see how WP8 based on the Windows NT kernel is going to be any more successful. I can't imagine why anyone would want to buy a WP7 device when Microsoft announced the other week that there would be no forward path for existing WP7 phones to run WP8.

I saw Aaron Seigo and Sebastian Kugler give a couple of great presentations about Plasma Active at the Akademy conference last weekend. The highlight was Aaron laying into so-called 'Tech Pundits' who said that Plasma Active can't possibly compete with Android. Aaron pointed out that Android and Plasma Active actually have nothing much in common, and the clueless pundits were doing something like criticizing an Italian meal at a restaurant for not being a French meal. An activity-based tablet is made of very different stuff to the conventional 'Bucketful of Apps' approach used by Android and iOS. By not competing with that approach head on, and having a very lean operation, they can construct a viable business even with relatively small sales.

If the Nepomuk store in Plasma Active can be used as a basis for the activity-based app integration, then so could the similar (although less powerful) Tracker store in the N9. That is another powerful capability of the N9 that has yet to be exploited by 3rd party apps. The N9 Tracker app data integration is much more powerful than the Windows Phone 7 hubs and active tiles. A nicer notification center is perhaps the main N9 feature that could be improved, as far as I can see.

Oh well, I just hope that Nokia will be able to sort themselves out, and recover from the train wreck that they appear to be at present. And thanks again guys for the N9.


          Screen Locking in Fedora Gnome 3   

KDE Project:

I wanted to try out Fedora 15 with Gnome 3 running under VirtualBox on my iMac before I went to the Berlin Summit. I've already tried using Unity-2d on Ubuntu, and I thought if I had some real experience with Gnome 3 as well, I could have a bit more of an informed discussion with our Gnome friends and others at the Summit.

Sadly it didn't go all that well. Installing the basic distro went fine, but I couldn't manage to install the VirtualBox Guest tools so that 3D graphics acceleration would work. The tools built fine, but the 'vboxadd' kernel module was never installed and there was no clue why in the build log. Then while I made a first attempt at writing this blog, VirtualBox crashed my machine and I lost everything. So it looks like I'll stick with VMWare for a bit yet even though it doesn't have 3D acceleration for Linux.

I discovered that Gnome 3 locks the screen, when it goes dim, by default, just like I found Kubuntu and Mandriva did recently. I had a look at where that option is defined and it was under 'Screen'. So screen locking was under 'Screen' and I managed to guess where it was first time. Score some points for Gnome usability vs KDE there! Even so, I still don't think it is a 'Screen' thing; it is a 'Security' thing. Interestingly Ubuntu doesn't lock the screen by default. Does that mean Fedora and KDE are aimed at banks, while Ubuntu is more aimed at the rest of us?

In contrast, I had spent a lot of time going round the KDE options and failing to find how to turn off screen locking. Thanks to dipesh's comments on my recent blog about virtual machines and multi-booting USB sticks, who pointed out that it was under 'Power Saving', I managed to turn it off on my Mandriva install. There were also options under power saving to disable the various notifications that had annoyed me so much, like the power cable being removed. Excess notifications are a real pain and it is very important to be disciplined about when to output them, in my opinion. It feels like some programmer has mastered the art of sending notifications, and they want to show that skill off to the world.

Another app that outputs heroic numbers of notifications is Quassel when it starts up. I get a bazillion notifications about every channel it has managed to join, that I really, really don't care about. I think developers need to ask the question 'if the user was given notification XXX how would they behave differently, compared to how they would have behaved if they never received it in the first place?'. For instance, I can't imagine what I would do differently if I am told the power cord is disconnected, when it was me who just pulled it out. Maybe it would be useful if you had a computer where the power cord kept randomly falling out of its socket. Or with Quassel, do I sit watching the notifications for the twenty different IRC channels that I join, waiting for '#kde-devel' so I can go in immediately? In fact I can't do anything with my computer because it is jammed up with showing me notifications.

Unlike Kubuntu, Mandriva was able to suspend my laptop when the lid was shut even when the power cord was connected.

The default behaviour on both Kubuntu and Mandriva with my HP 2133 netbook when I opened the lid, was to wake up with lots of notifications that I wasn't interested in, force me to enter my password in a screen lock dialog that I didn't want, and then immediately go back to sleep. This was actually the last straw I had with Kubuntu, and I was really surprised that Mandriva 2011 was exactly the same.

I had a look at my Mac System Preferences and couldn't find any way to lock the screen. The closest equivalent was in the 'Security' group that allowed you to system to log you out after x minutes of inactivity. That option certainly isn't on by default. Macs go to sleep when you close the lid, and wake up when you open the lid without a lot of fuss or bother.

Anyhow I look forward to seeing everyone in Berlin..


          Multiple everything - using VMWare, VirtualBox and Multisystem usb drives   

KDE Project:

Recently there was a post on Hacker News about collective nouns for birds in English. I run loads of virtual machines on my computer and I wonder what they should be called - 'a herd of virtual machines'? I have the mediocre Windows 7 Home Premium, and I wonder if that should be called 'a badling of windows' after the phrase 'a badling of ducks'.

The big change for me in my computing environment recently has been using virtual machines all the time instead of setting up my computers with multiple boot options. For work I use a 27-inch Macintosh with 8 GB of memory, running Windows 7 Home Premium and Kubuntu 11.04 under VMWare as guests under Mac OS X as the host. We have moved from scarcity in computer resources for programmers to a world where even the most underpowered netbook can handle most of the things I need to do.

I spent most of last weekend preparing my HP 2133 netbook for use at the forthcoming Desktop Summit. I wanted to get it working well in advance as I had a lot of trouble with my netbook at the recent Qt Contributors Summit. At that conference I couldn't even get WiFi working for the first day or two, and it was only because I happened to bump into the awesome Paul Sladen from Canonical that I managed to get it working to a reliable standard at all.

Talking to Paul, he didn't think he was any kind of power user (he works for Canonical as a UI designer), and that his suggestions to me about how to sort out my machine should be obvious. I'm not sure about what exactly I'm good at, but I think it is only programming and I am not a very good systems administrator (or a UI designer for that matter). But if I have a lot of trouble doing obvious things with my Linux portable, I think you can assume anyone at all normal will be having even more trouble.

I found out about the Multisystem project, and managed to create a USB stick with Kubuntu 11.04, Ubuntu 11.04, OpenSUSE 11.4, Mandriva 2010, Mandriva 2011 RC2, Fedora 15, Debian Squeeze 6.0.1. Then I tried running each of these distributions in turn and trying to install them onto my HP 2133.

My Great White Hope was SUSE because that was the distribution that my HP 2133 originally came with. I never got it running when I first tried to boot my new HP 2133 because I got a 'grub error 18' or similar after I got it home and tried to boot it - that would have put off 99% of potential Linux users straight away. The install of OpenSUSE 11.4 started and then after a while it died. Oh dear.

Next up was Fedora 15, and I got as far as the initial screen after being warning that my machine wasn't powerful enough to run Gnome 3. I started the 'install to hard disk' tool, and it died after a minute or two. Not enough memory, some other problem? I've no idea.

I was beginning to run out of ideas and then I thought of Mandriva and tried to install Mandriva 2010. That went well until I tried to boot and the Mandriva grub 1.0 install clashed with the grub 2.0 install of Kubuntu that I had on the second partition in my netbook. The naming scheme for partitions has changed between grub 1.0 and grub 2.0 and it isn't a good idea to combine them.

Large parts of my weekend were beginning to disappear even though it should be simple for an expert user to set up a netbook. Then I found out that Mandriva 2011 uses grub 2.0 and installing that worked great.

Back to bird collective nouns - I clearly had a 'flight of Mandrivas' here. I like the changes that Mandriva have made to the KDE UI. They have their own custom Plasma panel. It is black and doesn't look horrible like the default grey Plasma panel does. It doesn't have multiple virtual desktops by default. I thought I should just try the default UI at the Desktop Summit and see how I got on with it.

I wonder what has gone wrong with KDE usability, considering we have a usability expert on the KDE eV board. I find trivial usability problems with KDE really annoying. For instance, if I set up KDE Wallet, why do I have to give it a different password once I have logged in? Why doesn't it trust me?

If I try and make my laptop suspend while it is still connected to the mains power by shutting the lid, why doesn't it just suspend? I have to disconnect the power cable and then shut the lid again. Then plug the mains cable back in to ensure the laptop doesn't run out of power. Why do I get a completely useless notification about 'your laptop has had its power removed'? Of course I know that because I just removed the power cord to get round the problem of the laptop not suspending properly.

When my laptop wakes up it asks me for a password before it unlocks the screen. Why does it lock my screen by default? I don't know. I used to work on real time trading systems at a bank, and there was certainly a policy there of using screen lockers. But banks must be about 1% of KDE's users and so I have no idea why locking the screen is something a non-expert user should be confronted with. I haven't worked out how to disable screen locking even though it is a complete pain in the arse.

I have no idea why the KDE menus still have underlines in them to allow you to navigate without a mouse. That made sense when mice were rare things in the early eighties and power users without mice could navigate through Windows 1.0, but now about 1% of people use that option why are the other 99% being exposed to such an ugly UI.

I'm looking forward to the Berlin Conference and I hope we can make it an opportunity to sort out KDE's 1980s and 1990s UI problems like I've just described. Clearly Plasma Active is in advance of everything else in my opinion, but the normal widget-based KDE UI isn't. I still think we can do better than Mac OS X Lion and I look forward to hearing the opinions of KDE UI experts at the conference.


          Gtk Hello World in Qt C++   

KDE Project:

Recently I've been working on the smoke-gobject bindings in the evenings and weekends. Although I'm working on other things for my job at Codethink during the day, I'm sufficiently excited about these bindings to be unable to stop spending my free time on them. This is at the expense of working on the new version 3.0 of QtRuby sadly. I'll try to explain on this blog why I think the Smoke/GObject bindings will be important for the parties attending the forthcoming Desktop Summit to consider, and why I'm giving them a higher priority than Ruby.

As the title of the blog suggests I've just got a Gtk Hello World working, written in Qt. Here is the code:


#include <QtCore/QObject>
#include <gdk.h>

class MyObject : public QObject {
    Q_OBJECT
public:
    MyObject(QObject *parent = 0);
public slots:
    void hello();
    bool deleteEvent(Gdk::Event e);
    void destroy();
};
...

#include <QtCore/qdebug.h>
#include <gtk.h>

#include "myobject.h"

MyObject::MyObject(QObject *parent) : QObject(parent)
{
}

void MyObject::hello()
{
    qDebug() << "Hello World";
}

bool MyObject::deleteEvent(Gdk::Event e)
{
    qDebug() << "delete event occurred";
    return true;
}

void MyObject::destroy()
{
    Gtk::mainQuit();
}
...

#include <gtk.h>
#include <gtk_window.h>
#include <gtk_button.h>

#include <QtCore/qobject.h>

#include "myobject.h"

int main(int argc, char *argv[])
{
    Gtk::init(argc, argv);

    MyObject obj;

    Gtk::Window *window = new Gtk::Window(Gtk::WindowTypeToplevel);
    window->setBorderWidth(10);
    QObject::connect(window, SIGNAL(deleteEvent(Gdk::Event)),
                     &obj, SLOT(deleteEvent(Gdk::Event)));
    QObject::connect(window, SIGNAL(destroy()), &obj, SLOT(destroy()));

    Gtk::Button *button = Gtk::Button::createWithLabel("Hello World");
    QObject::connect(button, SIGNAL(clicked()), &obj, SLOT(hello()));
    QObject::connect(button, SIGNAL(clicked()), window, SLOT(destroy()));

    window->slotAdd(button);
    button->slotShow();
    window->slotShow();

    Gtk::main();

    return 0;
}

You can compare it with the equivalent Gtk C example code in this Getting Started Gtk tutorial. I could spend lots of space explaining what it is all about, but anyone familiar with Qt programming ought to be able to understand the code without any explanation - that is the clever thing in fact!

This demonstrates that an auto-generated binding for GObject libraries described by GObject Introspection .gir files can seamlessly inter-operate with Qt libraries. And it looks much the same as a native Qt library to a Qt programmer.

Normally a language binding for Gtk that allows you to write an app to show a hello world button is a fine thing. But those normal language bindings don't mean that you have united two of the major Linux desktop communities, they just mean that you can write Gtk programs in the language of your choice. So to me, this latest binding is different in kind; it is an opportunity to integrate both libraries and communities, and it isn't limited to being able to program Gtk in your latest nifty language.

I'm going to the GObject Introspection Hackfest in Berlin, which should allow us to nail down the technical details and get close to bringing the Smoke Gobject bindings up to production quality. It will also be about the social side of linking the two communities in conjunction with the technical discussions.

The Smoke GObject apis are described with QMetaObjects, and that means that the apis will 'just work' with QML. I am looking forward to trying out Clutter programming in QML soon, as I believe that should be possible. That shows that these bindings are not just about C++ programming, but languages like QML and QtScript that are driven by QMetaObject introspection too.

You can get the latest version of the gobject-smoke bindings from the Smoke GObject Launchpad project. See the README file for an explanation of what to do with them.


          Tuning memory and the network stack in Linux: the story of moving highly loaded servers to a fresh distribution   

Until recently, Odnoklassniki used a partially updated OpenSuSE 10.2 as its main Linux distribution. However, it was becoming harder and harder to support, so last year we began actively migrating to CentOS 7. During the preparatory stage of the move, all the internal procedures were worked out for CentOS, and configs and configuration policies were prepared (we use CFEngine). So now, in many cases, migrating from one distribution to the other comes down to installing the OS via kickstart and rolling out the application with our in-house deployment system; everything else happens without human involvement. That is how it goes in many cases, although not in all of them.

But we ran into the biggest problems when migrating the video delivery servers. Solving them took us half a year.
Read more →
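The article's actual settings are behind the "Read more" link; as a generic sketch of the kind of memory and network-stack tuning it refers to (the values are illustrative, and the file name under /etc/sysctl.d/ is arbitrary):

# Apply at runtime:
sudo sysctl -w net.core.rmem_max=16777216
sudo sysctl -w net.core.wmem_max=16777216
sudo sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
sudo sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"
sudo sysctl -w vm.swappiness=10

# Persist across reboots on CentOS 7:
echo "net.core.rmem_max = 16777216" | sudo tee /etc/sysctl.d/90-tuning.conf
sudo sysctl --system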
          GParted Live (ZIP-Version) 0.17.0-1   
GParted Live is the ultimate partitioning tool for Windows and Linux.
          (This may not be a direct answer to your question…   

(This may not be a direct answer to your question, but I hope it is useful as a reference.)

If the machine running Linux has wireless client hardware, installing the appropriate driver will let you use Linux over the wireless connection.

http://www.atmarkit.co.jp/flinux/rensai/linuxtips/761usewlan.htm...

Also, if no Linux driver is provided, it may be worth trying a mechanism that wraps the Windows driver so it can be used on Linux (NDISwrapper).

http://www.eonet.ne.jp/~halt/pc_memo/ndiswrapper-debian.html
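As a rough sketch of the NDISwrapper approach from the second link (package names vary by distribution, and driver.inf stands in for the Windows driver's .inf file):

sudo apt-get install ndiswrapper-utils        # on Debian/Ubuntu; the package name may differ
sudo ndiswrapper -i /path/to/driver.inf       # register the Windows driver
ndiswrapper -l                                # should report "driver installed, hardware present"
sudo modprobe ndiswrapper                     # load the kernel module
iwconfig                                      # the wireless interface (e.g. wlan0) should now appear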


          Systems Support Analyst - Linux   

          Ksplice gives Linux users 88% of kernel updates without rebooting   

Have you ever wondered why some updates or installs require a reboot, and others don’t? The main reason relates to kernel-level (core) services running in memory which either have been altered by the […]

The post Ksplice gives Linux users 88% of kernel updates without rebooting appeared first on Geek.com.


          CenTex New Location Special - Now Offering LA - 25% Off OVZ HDD VPS Plans - From $2.91/Mo   

Company: CenTex Hosting

Why You Should Choose CenTex Hosting:
-24/7 Support
-Texas Based company
-100% Owned Equipment (We are not a reseller!)
-Enterprise Equipment
-RAID-10 Power Servers
-SolusVM Control Panel
-rDNS Available On Request
-INSTANT SETUP
-Many Linux OS Options
-Extra IPs Available For Purchase

We are happy to announce we have launched our newest location: Los Angeles.

Our new location features Asia optimized bandwidth.

Locations Available:
Dallas, TX: Click Here For Our Looking Glass
Los Angeles, CA: Click Here For Our Looking Glass

To celebrate our new location we are offering 25% off recurring on all OpenVZ HDD VPS plans by using coupon code: larelease2517

256MB Yearly Plan
256 MB RAM
25 GB HDD
1 CPU Core
500 GB Bandwidth
1 IP

Normal Price: $15/Yr
Price with Coupon Code: $11.25/Year
Coupon Code: larelease2517

Order Now


512 MB Yearly Plan
512 MB RAM
100 GB HDD
1 CPU Core
1 TB Bandwidth
1 IP

Normal Price: $25/Yr
Price with Coupon Code: $18.75/Year
Coupon Code: larelease2517

Order Now


1 GB VPS Plan
1 GB RAM
150 GB HDD
1 CPU Core
2 TB Bandwidth
1 IP

Normal Price: $3.88/Mo
Price with Coupon Code: $2.91/Mo
Coupon Code: larelease2517

Order Now


2 GB VPS Plan
2 GB RAM
200 GB HDD
2 CPU Core
3 TB Bandwidth
2 IPs

Normal Price: $6.88/Mo
Price with Coupon Code: $5.16/Mo
Coupon Code: larelease2517

Order Now

Payment Methods: Credit Card, PayPal, and Bitcoin

If you have any questions regarding this offer please feel free to contact us.


          US Dedicated: Dual Intel Xeon L5630 | 8 cores | 32 GB RAM | 20 TB Bandwidth | $48.99/month   

Hi All,

Here at WiredBlade all our servers are fully customizable: RAM, hard drive space, the operating system (CentOS, Windows and more), and bandwidth. Our dedicated servers are ideal for all types of businesses. We believe that a strong support team, available 24/7 to provide our users with solutions, is a necessity. Here's a deal that's exclusive to LET.

-CPU: Dual Intel Xeon L5630 (8 Cores, 2.13 GHz)
-RAM: 32GB
-Storage: 1TB SATA
-IPv4 Address: 1 IP
-IPv6 Address: /64 Subnet
-Monthly Bandwidth: 20 TB
-Operating System: Linux, Microsoft Windows Server 2008/2012 or self-installed
-Remote Management: Dedicated KVM over IP / IPMI
-Premium Network Blend: Level3, Zayo, Comcast, Cogent and PCCW
-Datacenter Location: Phoenix, Arizona
-Price: $48.99/month after discount | Order Now
-Upgrades available for RAM, Disk Space, IP Addresses, Bandwidth

FAQ

Q. How long does it take to provision my server?
A. New server orders are provisioned within 24 hours of payment.

Q. Are the servers managed?
A. All of our services are normally self-managed. We provide the server, bandwidth, and hardware support. You can contact us for custom managed server quote.

Q. Is your bandwidth shared?
A. No! All of our servers are fully dedicated. You have access to the full amount of bandwidth 100% of the time, and you will not share a connection with any other customer or server.

Q. Do you offer KVM over IP?
A. Absolutely! All servers come with IPMI capability built in.

Q. What is your acceptable use policy?
A. Warez-related, Bulk Mail, and Spam-related activities are strictly forbidden. CAN-SPAM, Copyright, DMCA, and other related US laws must be strictly followed. For more information, you can refer to our Term Of Use and Privacy Policy.

Why choose us?

-Over 15 Years of Experience
- Since 1997, we have been committed to providing innovative services along with rich features.

-24x7 On-Site Tech Support
- Zero Outsourcing In-house, around the clock support is available 24X7. We are always ready to resolve your issues.

-100% Network Uptime
- Our network is designed to be fully redundant. Downtime is not something that we will tolerate.

-Fast Server Deployment (within 24 hours)
- Your server will be well configured and ready for you to use within 24 hours of the order. We will take care of all your custom configurations.

-Security
- Your dedicated server will be hosted in our highly secured data center featuring 24x7x365 on-site security, mantraps with anti-tailgating technology, biometric and keycard access denial systems, CCTV throughout the facility interior and exterior, and data center access logs. Your data is safe with us. We protect it with our own server management expertise.

Payment Methods: PayPal & All Major Credit Cards.

All servers are located in our Tier 3+ Phoenix AZ data center. Please visit our website at www.wiredblade.com to open a ticket with our Sales department for any custom quote or any questions you may have.

You can contact us by phone, email or through the ticketing system.

Feel free to also reach out to me directly if you have any questions.

Timothy@Wired Blade


          Joomla Component com_performs component arbitary file upload   
# Exploit Title: [Joomla Com_performs component arbitary file upload]
# Google Dork: inurl:index.php?option=com_performs upload cv
# Date: [2012-09-27]
# Exploit Author: [Mormoroth]
# Vendor Homepage: [http://www.performs.org.au/]
# Version: [2.4 and prior]
# Tested on: [Linux/Windows]
------------
An attacker can upload files via the uploader form.
 
Uploaded files go to /joomlaPath/media/uploads.
 
This form builder renames uploaded files using a simple combination of the date and time.
 
For example, if you upload a file, it will be renamed to >> 2012-09-28-20-05-Unknown-file.txt
 
[2012-09-28] is the current date, [20-05] is the upload time (hour/minute), [Unknown] never changes, and your original file name follows.
 
With a simple brute force you can find the upload time, which is the hard part of guessing the exact name of your uploaded file.
------------
 
From Iran
 

          Ubuntu 9.04 (Jaunty) on the Asus EeePC 901   
With the release of version 9.04 of Ubuntu Linux, this distribution has really made giant strides in its support for netbooks. Besides releasing a version designed specifically for small devices (which includes the Notebook Remix interface by default), all the modules needed to make the peripherals work have been included in the kernel […]
          Dropbox: 2 GB of online disk space that integrates with Windows, OSX and Linux   
For some time I had been looking for a service that would give me a small online space, always accessible and usable like a USB stick, where I could put my data and have it available wherever I went. In recent days, after hearing a friend speak very highly of Dropbox, I decided to try it. Dropbox […]
          C, C++ Developer - Linux, SQL   

          Linux Systems Administrator - McKesson - Atlanta, GA   
In-depth knowledge of VMware ESX would be preferred as well as an operational understanding of Dell PowerEdge servers....
From McKesson - Thu, 01 Jun 2017 00:56:45 GMT - View all Atlanta, GA jobs
          GLOBALSAT GS240 NEW UPDATE V2.15 SKS 63W AND 87.2W - 28/06/2017   

GLOBALSAT GS240 NEW UPDATE V2.15 SKS 63W AND 87.2W - 28/06/2017




SKS 63W AND 87.2W 


          New Software Boutique in Ubuntu MATE supports Snaps   

Martin Wimpress announced on Google Plus that the Ubuntu MATE developers have been working (and may still be working) on a new Software Boutique for Ubuntu MATE 17.10 Artful Aardvark. They hope the new one will be ready in time for the final release of Ubuntu MATE 17.10. A small preview already exists. The Software Boutique is being split off from Ubuntu MATE Welcome. It is a very user-friendly, though slightly limited, package manager for Ubuntu MATE. The software is actually even a bit more than that. It suggests […]

The post New Software Boutique in Ubuntu MATE supports Snaps is from Linux | Spiele | Open-Source | Server | Desktop | Cloud | Android.


          Backup or data protection for a root server / vServer / VPS   

My root server at Contabo is locked down, can meanwhile also send mail, and my website is already running on it. Since I have to take care of backups myself on this VPS (Virtual Private Server), a plan is needed. I have now worked one out, it is already running in this form, and I would like to share it. Let me start with an admin saying: backups are for cowards, but blessed is he who has one! I set up the backup very early on and […]
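The post is cut off here; as a rough sketch of one common approach to such a plan (rsync over SSH plus a dated tar archive; the host name and paths are placeholders, not taken from the post):

# Push the important directories to a remote backup host:
rsync -aAX --delete /etc /home /var/www backupuser@backuphost:/backups/myvps/

# Or keep a dated archive locally before copying it off the server:
tar -czpf /var/backups/root-$(date +%F).tar.gz /etc /home /var/www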

The post Backup or data protection for a root server / vServer / VPS is from Linux | Spiele | Open-Source | Server | Desktop | Cloud | Android.


          Android and Chrome   
As someone who follows browser development pretty closely, a friend sent along this perfect summation of Google's open source strategy a la Andriod and Chrome by Dan Lyons (formerly known as Fake Steve Jobs): [Android is] the desktop Linux...
          Beta testing opens for Dr.Web 11.0 for UNIX mail servers and Dr.Web 11.0.3 for Linux   

June 6, 2017

Doctor Web invites you to take part in the beta testing of Dr.Web 11.0 for UNIX mail servers and Dr.Web Anti-virus 11.0.3 for Linux. Both products have been substantially reworked and equipped with new features that make the protection system easier to work with and raise the level of security.

Changes in Dr.Web for UNIX mail servers:

  • the product architecture has been completely overhauled: it now works with the existing mail server infrastructure through standard mail server interfaces (Milter, Spamd, Rspamd) instead of integrating into the server;
  • added the ability to embed directly into the data exchange channels that use the standard mail protocols SMTP, POP3, and IMAP, transparently to both the mail server and the client (this mode is available only on GNU/Linux);
  • unwanted content in incoming mail is now saved to a password-protected archive rather than to quarantine;
  • added its own web-based management interface, removing the need to use Webmin;
  • implemented the ability to use information from directory services (OpenLDAP, Active Directory) in mail-checking rules;
  • added sending of product status statistics and notifications about important events (for example, blocked messages) via SNMP;
  • a Clamd module is available for checking mail on mail servers that cannot use Milter/Spamd/Rspamd;
  • sender/recipient addresses of messages can be checked against DNSxL services.

Changes in Dr.Web Anti-virus for Linux:

  • added scanning of incoming mail over IMAP(S)/POP3(S);
  • added scanning of outgoing mail over SMTP(S).

We invite everyone interested to try out the new capabilities of Dr.Web 11.0 for UNIX mail servers and Dr.Web Anti-virus 11.0.3 for Linux.


          Comment on A Tale of Two Utilities… by mookie   
If you want the DVD ISO for CentOS, you can get it via BitTorrent. Just look in the ISO directory of choice and you'll find a .torrent file. For instance: http://centos.cs.wisc.edu/pub/mirrors/linux/centos/4.4/isos/i386/CentOS-4.4-i386-binDVD.torrent If you have a DVD-ROM drive, this is the most convenient way of installing CentOS.
          Comment on Linux? Real time? I don’t think so… by Dan Poirot   
Hey Trevor, Rats! I missed my own point! The point I wanted to make was that in his column, Phil Hochmuth had Wind River in the group pushing Linux as real time when that is not the case... For sure Linux has a place in the device space but the "real time" patches will have to come a long way before Wind River will call it real time. Thanks for the comment! - dan
          Comment on Linux? Real time? I don’t think so… by Trevor   
I think you're mistakenly assuming that "real time" always means "hard real time". Don't forget about "soft real time". These new patches may not be viable for safety-critical hard real time systems, but for soft real time, they'll probably be perfect. Think about applications like audio and video processing, VoIP servers running Linux, etc. These are domains where dropping a packet isn't the end of the world, but you still want harder guarantees and stricter, more responsive scheduling than you could get with previous Linux kernels. I wouldn't be so quick to deride the idea of Linux and real time.
          Using AT-SPI from IronPython (1)   
This adventure started when Jim Hugunin mentioned System.Windows.Automation library new in .NET 3.0.

GUI test automation and assistive technology (such as screen readers) share some common needs. While UI Automation is named after the former, AT-SPI is named after the later. AT-SPI, which stands for "Assistive Technology Service Provider Interface" -- this is the first of lengthy acronyms that will appear in this post -- is an accessibility standard for Unix/X world. Initially developed by the GNOME project, now it is also supported by Java, Mozilla, OpenOffice.org, and Qt 4.

While Microsoft-Novell interoperability agreement announced an intention to implement UI Automation on Linux (see above Wikipedia links for details), that's not available today on my Debian GNU/Linux desktop. So I looked for a way to use AT-SPI from IronPython on Mono.

First thing I did was to install at-spi package from the Debian repository. That was obvious... Less obvious was how to use it after installation, especially because I am not using GNOME desktop (I am an IceWM user). After some search, I added following two lines to my .xsession.

export GTK_MODULES=gail:atk-bridge
/usr/lib/at-spi/at-spi-registryd &


Now AT-SPI has an accessibility broker which clients talk to, and it talks CORBA. CORBA, which stands for (I warned you) "Common Object Request Broker Architecture", is like a big brother of IPC mechanisms. CORBA has been around for a long time, and while it is sometimes accused of bloat, its bloat is nothing compared to certain XML-based "Simple" Object Access Protocol.

So how does one use CORBA from Mono? A little search found a nice project named IIOP.NET, which "allows a seamless interoperation between .NET, CORBA and J2EE distributed objects." Cool. This project even has a support table for Mono on its status page! The download page mentions both binary and source release, but I couldn't find the binary release. No problem. Download the source release, unzip, and run "make -f Makefile.mono". Note that Makefile is for nmake, a Microsoft dialect of make, which is not compatible with GNU make. The build finished with no problem.

Bah, this is getting too long. Let's continue on the next post.
          Teaching IronRuby math tricks   
The first release of IronRuby brought a lot of buzz, and in my opinion rightly so. However, if you expect it to run Rails, seamlessly integrating ASP.NET widgets today, I must say you're delusional. Have you tried to run it at all? People, there are reasons why it is versioned Pre Alpha.

In the post titled "Is IronRuby mathematically challenged?", Antonio Cangiano rightfully complains of these fads. He writes:

Well, I was very interested in trying out IronRuby, but I immediately discovered that it is very crippled from a mathematical standpoint, even for a pre-alpha version. (...) However after running some simple tests, it is clear that a lot of work is required in order for this project to live up to the buzz that is being generated online about it, when you take into account that even some simple arithmetic functionalities are either flawed or missing altogether.


To be fair, the focus of this release is a working method dispatch core and the built-in Array and String classes, as John Lam himself wrote. But it is understandable for people to worry that these problems may be difficult to remedy. Fortunately, that is not the case, as I will demonstrate below.

Remember, IronRuby is open source, so you can fix problems yourself. Can't divide two floating-point numbers? It turns out to be as easy as adding a one-line method to the FloatOps class. Big numbers don't work? Conveniently, the DLR provides a high-performance class to deal with arbitrary precision arithmetic, namely Microsoft.Scripting.Math.BigInteger. This is how the Python long type is implemented in IronPython.

Without further ado, here's a small patch (34 lines added, 1 line deleted) to remedy the problems Antonio pointed out. I think you will be able to understand it even if you don't know C#! It's that simple.

http://sparcs.kaist.ac.kr/~tinuviel/download/IronRuby/pre-a1/patch-math

If you are using a certain operating system which lacks such a basic tool as patch, I heartily recommend heading to GnuWin32 and getting it. Add it to your PATH. Let's assume that you extracted the zip file to C:\. You need to pass the --binary option to patch because of different line endings; I generated the patch on Linux.

C:\IronRuby-Pre-Alpha1>patch --binary -p1 < patch-math
patching file Src/Ruby/Builtins/Bignum.cs
patching file Src/Ruby/Builtins/FixnumOps.cs
patching file Src/Ruby/Builtins/FloatOps.cs

After that, you need to build ClassInitGenerator. This is necessary because the patch adds new methods to built-in classes.

C:\IronRuby-Pre-Alpha1>cd Utils\ClassInitGenerator
C:\IronRuby-Pre-Alpha1\Utils\ClassInitGenerator>msbuild

Now that it is built, you need to run it to regenerate Initializer.Generated.cs. There is a batch file to do this, GenerateInitializers.cmd, but for some inexplicable reason it won't work: it goes up one parent directory (..) too many. It seems that they haven't tested this.

C:\IronRuby-Pre-Alpha1\Utils\ClassInitGenerator>cd ..\..
C:\IronRuby-Pre-Alpha1>Bin\Debug\ClassInitGenerator > Src\Ruby\Builtins\Initializer.Generated.cs

Now to the main build.

C:\IronRuby-Pre-Alpha1>msbuild

Let's test! Did IronRuby learn the math we taught?

C:\IronRuby-Pre-Alpha1>cd Bin\Debug
C:\IronRuby-Pre-Alpha1\Bin\Debug>rbx

IronRuby Pre-Alpha (1.0.0.0) on .NET 2.0.50727.832
Copyright (c) Microsoft Corporation. All rights reserved.
>>> 1/3.0
=> 0.333333333333333
>>> 1.0/3.0
=> 0.333333333333333
>>> 2**3
=> 8
>>> 1_000_000 * 1_000_000
=> 1000000000000
>>> exit

It did!
          CyberSecurity Engineer - Black Box Network Services - Lawrence, PA   
Experience in the administration of Windows and Linux operating systems. This position is for a senior engineer in the CyberSecurity department....
From Black Box Network Services - Wed, 07 Jun 2017 20:18:04 GMT - View all Lawrence, PA jobs
          Samsung CLP-310N Printer Drivers - Windows   

Samsung CLP-310N Printer Drivers - Windows


Samsung CLP-310N

Download the drivers for the Samsung CLP-310N.


The Samsung CLP-310N driver downloads available at Driver Max Download are hosted on servers with direct links to the file (if it does not download, you can choose another server or let us know at - Requests). 

You may find more drivers you need with our partners:

Partners: Giga Driver | Geek Driver | Kit Driver | BR Driver

Our Facebook club: Clube dos Drivers

We would be very grateful if you placed a link to Driver Max Download on a forum, a social network, or your own web page. 

Request your driver at - Requests
or find it Here!


Downloads for the Samsung CLP-310N:


 Supported OS: Windows / Mac / Linux


Available Downloads:

Samsung CLP-310N

Windows 7 (64-Bit), Windows 7 (32Bit), Windows XP (64-Bit), Windows XP (32-Bit)

Samsung CLP-310N Windows Printer Driver Download (9.76 MB)


Mac OS X 10.8, Mac OS X 10.7, Mac OS X 10.6, Mac OS X 10.5

Samsung CLP-310N Mac Printer Driver Download (6.07 MB)


 Linux

Samsung CLP-310N Linux Printer Driver Download (32.2 MB)
This driver works with the entire Samsung CLP-310N series.

          SIL, mono, and COM on Linux   
A developer named Mark Strobert contacted me a while back about the feasibility of using mono and it's COM Interop implementation to port Windows software written using C# and COM. I told him it should be possible, and we discussed one of the main problems being the lack of a COM infrastructure on Linux. I hadn't heard from him again for a while until...

he emailed and informed me that they were open sourcing their basic COM implementation, a.k.a. libcom. In looking over their blog/wiki, it seems as if the use of mono's COM Interop and their libcom is holding together fairly well. I won't say this is the first real world test of COM Interop in mono, as Shana has used it quite well to implement the System.Windows.Forms WebBrowser control. However, the fact that mono's COM Interop implementation is being used by others is exciting to me (if anyone is keeping their mono COM Interop use in the closet, drop me a line to let me in on the secret).

The developers are using mono to port and develop software in order to support a faith-based organization that is involved in literacy, linguistics, and translation. I won't go on any further, because if you're interested you can click around their site (or Linux development site) to see better information than I could explain here.
          Three COM Interop Updates   
COM Callable Wrappers

First, COM Callable Wrapper support is now in svn. This means that managed objects can be passed to C++ and interacted with as if they were unmanaged COM objects. It still needs some more work, but most basic use of this functionality should work.

User Base Doubles

Previously, I seemed to be the only person interested in using COM Interop, especially on Linux. However, I had an exchange with someone on IRC last week who is trying to port their C#/C++ COM application from Windows to Linux. They are investigating using COM Interop instead of having to write a C wrapper layer with pinvokes.

Mozilla/XPCOM

Theoretically, the COM Interop functionality should work with Mozilla/XPCOM or any COM-like system, at least on Windows and Linux on x86 and x86-64 architectures (COM Interop depends on the binary layout of vtables). I had always planned on trying COM Interop on XPCOM but never had the time. This weekend I took an hour or so and tried to hack something together. Since Monodevelop has an ASP.Net designer that uses Mozilla, and one of the future goals is better DOM access, I decided to put the experiment inside the editor.

The good news is that it works! I can traverse/access the DOM without any glue/wrapper library. I just need to define some interfaces in C# and get a pointer to the DOM. The bad news is that strings in Mozilla/XPCOM really stink. I couldn't figure out how to marshal the strings correctly, which made the exercise less than complete. If anyone knows what a DOMString is, please post a comment. If it ends up being a fairly simple type that can be marshalled, that would be great; however, if it's just a C++ class (as I currently fear), some unmanaged glue code will be needed to convert the string to something usable by the marshaller.
          COM Interop in Mono (part 1)   
The past few months I have been looking into supporting COM Interop in Mono. Not just for Microsoft COM on Windows, but hopefully XPCOM on Windows and Linux, as well as COM ported to Linux by third parties (Mainsoft, for example). I initially tried to do everything in managed code and did come up with a solution; but it was ugly and lacked some of the functionality of MS's implementation. So, I took the plunge and began looking into the mono runtime.

COM Interop is a large topic. The definitive book for COM Interop, .Net and COM: The Complete Interoperability Guide, is a solid 1500+ pages. So I'll skip any general explanation, and instead focus on the implementation in mono. COM Interop is bidirectional. It provides a mechanism to expose unmanaged COM objects to managed clients. The managed wrapper is called a runtime callable wrapper (RCW). It also provides a way to expose managed objects as COM objects to unmanaged clients. The wrapper exposed to unmanaged code is called a COM callable wrapper (CCW). My initial focus has been on the former.

Currently, I am using the same interop assemblies that MS does. An interop assembly is generated from a COM type library, and essentially converts COM type information into metadata. This interop assembly is what is referenced by managed code. This assembly contains managed interfaces that correspond to COM interfaces, as well as managed classes that correspond to COM coclasses. The methods on each interface are in the same order as they are in the COM interface. Thus, we can determine the layout of the COM interface vtable. The usage of COM objects in managed code can be divided into two cases.

The first case is when a class in the interop assembly is created in managed code. When the user creates a class that is marked with the ComImportAttribute (all classes and interfaces in the interop assembly are marked with this attribute), extra space is reserved in the class for a pointer to the unmanaged COM object.
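
To make that concrete, here is a hand-written sketch of the kind of types an interop assembly contains; the interface, the GUIDs and the method are invented for illustration, not taken from any real type library:

using System;
using System.Runtime.InteropServices;

[ComImport]
[Guid("11111111-1111-1111-1111-111111111111")] // IID of the COM interface (made up)
[InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
interface ICalculator
{
    // Methods are declared in the same order as they appear in the COM vtable.
    int Add(int a, int b);
}

[ComImport]
[Guid("22222222-2222-2222-2222-222222222222")] // CLSID of the coclass (made up)
class CalculatorClass // creating this RCW is what reserves the extra space for the COM pointer
{
}

Creating a CalculatorClass and calling Add on it (through ICalculator) is what drives the machinery described next.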

All methods on the RCW are marked as internal calls. When the runtime tries to resolve the internal call in mono, instead of looking it up in the internal call tables, a trampoline (forgive me if I'm using this term wrong) is emitted that will call the method on the underlying COM object. That trampoline first calls a helper method and passes it the MonoMethod and the MonoObject that the call is for. The pointer to the COM object is obtained from the MonoObject (stored in the extra space reserved for the pointer). Next, the interface that the method was defined on is determined. The GuidAttribute on this interface is used to call QueryInterface on the COM object. The returned interface pointer is the correct 'this' pointer for the unmanaged function. Then, the offset of the method in the interface is determined. This offset is used to obtain the function pointer via the vtable of the COM object. That function pointer is called using a stdcall calling convention with the 'this' pointer pushed as the first argument on the stack. Most COM methods return an HRESULT (int) that indicates success/failure. A failure HRESULT is translated into a managed exception.

The second case occurs when an interface pointer is returned from a method/property, or when an RCW is cast to an interface that is not listed in the metadata. RCWs are special in that a cast can succeed to an interface other than those listed in the class's metadata. The runtime calls QueryInterface on the underlying COM object, and if that call succeeds the runtime allows the cast to occur. At this point, the runtime knows nothing of the COM object's identity except that it supports the given interface. The managed object's type becomes a generic type that wraps COM objects, System.__ComObject.

To handle this in mono, a solution was built on top of the remoting infrastructure. First, a new internal class was defined, System.ComProxy, that derives from System.Runtime.Remoting.Proxies.RealProxy. The constructor for the ComProxy class takes an IntPtr argument, the pointer to the IUnknown interface of the COM object. System.ComProxy also implements System.Runtime.Remoting.IRemotingTypeInfo, which allows for special handling of casting via the CanCastTo method. When a COM interface pointer is returned, a new instance of ComProxy is created for the type System.__ComObject. Then a call to GetTransparentProxy is made. The transparent proxy (tp) object that is returned is cast to the expected managed interface. This in turn causes a call to CanCastTo, which calls QueryInterface on the COM object to determine whether the target interface is supported. If the interface is supported, the remoting infrastructure dynamically adds the interface and its methods to the proxy's vtable. Any method calls on this interface are now handled as in the previous case.

And there it is. Too much information for the casual reader, and not enough for those of you who care. Code will hopefully follow shortly, as I have time.
          Asus X541UA-GO1345   
Asus X541UA-GO1345 with Intel Core i3-6006U 2.00 GHz, 3 MB cache, 4096MB DDR3L, 240GB SSD, 15.6" 1366x768, Intel HD Graphics, Linux, Chocolate Black
          Asus X541UA-GO1345   
Asus X541UA-GO1345 with Intel Core i3-6006U 2.00 GHz, 3 MB cache, 4096MB DDR3L, 120GB SSD, 15.6" 1366x768, Intel HD Graphics, Linux, Chocolate Black
          Asus X541UA-GO1345   
Asus X541UA-GO1345, Intel Core i3-6006U (2.0GHz, 3MB), 15.6" HD (1366X768) LED Glare, Web Cam, 4096MB DDR3L 1600MHz, 1TB HDD, Intel HD graphics 520, DVD+/-RW, 802.11n, BT 4.0, Linux, Black
          Sad experience with Debian on laptop...   

KDE Project:

Until a few weeks ago, I had Kubuntu running on my Acer Aspire 5630 laptop (as described here), and was more or less satisfied. It looked great and hardware support was satisfying, but I was missing the incremental package upgrades that I was used to on Debian (so that things break one small piece at a time, not everything at once when you do an upgrade). When, after upgrading to gutsy, the laptop would lock up every few minutes for a minute or so, I thought it was a Kubuntu problem and took it as the reason to set up Debian instead. BIG MISTAKE!!!!

After I had Debian installed, I realized how bad Debian's Laptop support really is:

  • KNetworkManager would not work with any WPA-encrypted WLAN networks (I can only connect to unencrypted networks), so after booting I now need to run wpa_supplicant manually as root with the proper settings (see the sketch after this list)...
  • The ACPI DSDT in the BIOS is broken on this laptop, so suspend and hibernate won't work. In Kubuntu, I could simply fix the DSDT.aml and put it into the initrd, where the kernel picked it up. Unfortunately, the Debian developers decided not to include that patch, so I can't replace the DSDT with the fixed one in the stock kernel. The patch is also not upstream, as described on the ACPI page, because the kernel devs feel that, inter alia, "If Windows can handle unmodified firmware, Linux should too." I think so too, but currently that is simply wishful thinking and has nothing to do with reality!!! I have yet to see one laptop where ACPI simply works out of the box in Linux! As a consequence, it seems that I will need to patch and compile the kernel myself for every new kernel upgrade (and of course also the packages for the additional kernel modules to satisfy the dependencies!). The kernel devs again argue that "If somebody is unable to rebuild the kernel, then it is hard to argue that they have any business running modified platform firmware." Again, I agree, but just because **I AM** able to compile a kernel does not mean that I should be forced to compile every kernel I ever want to use myself!
  • The Debian kernel also does not include the acerhk module, which is needed to support the additional hot keys on the laptop
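
For the record, this is roughly the manual dance I now go through after every boot; the SSID, passphrase and interface name below are placeholders, so adjust them to your own setup:

# /etc/wpa_supplicant.conf - minimal WPA-PSK configuration
network={
    ssid="MyHomeNetwork"
    psk="my secret passphrase"
}

# then, as root:
wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant.conf
dhclient wlan0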

So, in short, I now have a laptop without properly working WLAN, no suspend and hibernate, and no support for the additional multimedia keys. Wait, what were my reasons for buying a laptop? Right, I wanted it for mobile usage, where I'm connected via WLAN and can simply open it, work for two minutes and suspend it again...

I'm now starting to understand why some people say that Linux is not ready for the masses yet. If you are using Debian, it really is not ready, while with Kubuntu, all these things worked just fine out of the box (after I simply fixed the DSDT).

If having to recompile your own kernel every time it is upgraded is the price to pay for running Debian, I'm more than happy to switch back to Kubuntu again (which will cost me another weekend, which I simply don't have right now). The Kubuntu people seem to have understood that good hardware support is way more important than following strict principles (since the kernel devs don't include the DSDT patch, the Debian people also won't include it, simply because it's not upstream... On the other hand, they are more than happy to patch KDE with self-tailored patches and introduce bugs with those patches!!!).


          Software Engineer for Module Development, from 70,000 RUB   
Education: higher education;
Work experience: at least 1 year;
Responsibilities: development of drivers and application software for Linux and for Windows, including with the IntervalZero RTX real-time extension; development of BSPs and application software for VxWorks and other real-time operating systems; support of existing software; architecture design of newly developed software.

          Firefox 55   
 Version 55 beta of the Firefox browser in Italian, for the Windows, Linux and Mac OS X operating systems.
          JUNIOR IT ASSISTANT FOR VFX COMPANY - Goldtooth Creative Agency Inc. - Vancouver, BC   
Experience in Windows and/or Linux networking, including Active Directory, DHCP, DNS. Disaster recovery planning, implementation and testing....
From Indeed - Tue, 27 Jun 2017 20:13:00 GMT - View all Vancouver, BC jobs
          Moving my blog   
Hi everybody, I am moving my blog to: http://blog.linuxgrrl.com Actually, I tried to do this maybe a year ago and gave up because there was some issue with the feed getting cut off in Planet. I’ve got a new version of WordPress running and after testing the feed with Planet, it seems to work fine … Continue reading
          Ideas for a cgroups UI   
On and off over the past year I’ve been working with Jason Baron on a design for a UI for system administrators to control processes’ and users’ usage of system resources on their systems via the relatively recently-developed (~2007) cgroups feature of the Linux kernel. After the excitement and the fun that is the Red … Continue reading
          Comment on Lamp, Mamp and Wamp by Andy   
Pretty good instructions for Linux. I installed on Ubuntu 14.04 and there were 2 differences from your instructions: 1. The root directory for putting the test.php file is /var/www/html/ 2. The mySQL install already asked me to set a password, so the mysql -u root command failed. I skipped this step. Thanks 28.10.2015
          Comment on How to Sell Linux to Schools by Mira   
Free knowledge like this doesn't just help, it promotes democracy. Thank you.
          Windows 10 S will not support Linux   
none
          Lubuntu 14.10 Utopic Unicorn Final Beta   
Testing has begun for the Final Beta (Beta 2) of Lubuntu 14.10 Utopic Unicorn. Head on over to the ISO tracker to download images, view testcases, and report results. If you’re new to testing, anyone can join and they don’t have to be Linux Jedis or anything. You can find all the information you need […]
          Adobe Premiere Pro CC - Das umfassende Training   
- Software / Photo editing

Trainer Jörg Jovy delivers a full 13 hours of video training on his two DVDs "Adobe Premiere Pro CC - Das umfassende Training (PC+MAC+Linux)". Step by step, it explains how to create impressive films with the video editing and cutting program. Jovy's 25 years of professional experience flow into it, and he explains the artistic and the technical aspects of video editing in an easily understandable, didactically sound way.

Anyone looking for an easy and convenient way to get to grips with the many functions of Adobe Premiere Pro CC should turn to the course "Adobe Premiere Pro CC - Das umfassende Training (PC+MAC+Linux)" by Jörg Jovy. Thanks to the clearly laid-out user interface of this training, the individual courses can be reached quickly.

What you learn can be tried out immediately, and there is also the option of working along with the trainer right away. The author strikes the right balance between a thorough introduction and valuable tips for daily practice. To consolidate things even better, topics are often explained more than once, so you don't constantly have to page back.

Over the 13 hours of running time, which certainly take a while to work through completely, there are detailed explanations of the user interface, explanations of the individual editing windows, and tips on preparing editing projects. Then, of course, cutting picture and sound is covered, and the author goes into various effects and explains how to design titles and lower thirds. From color grading and audio optimization it goes all the way to rendering the finished film, so you can quickly present your own video.

Verdict: This training is highly recommended for newcomers to video editing, since the basics are covered in a didactically well-prepared way and everything is rounded out with useful tricks for everyday practice.

Order it now from Amazon!
 
Related topics:
Adobe After Effects CC - Das umfassende Training (Software) , 16. 2013 - 0 comments
Adobe Premiere Pro CS6 - Das umfassende Training (Software) , 26. 2012 - 0 comments
Adobe Dreamweaver CS3: Das umfassende Training (Software) , 17. 2007 - 0 comments
Das Grafikpaket für Adobe Photoshop CS3 (Software) , 29. 2007 - 0 comments
Creative Suite 5.5 – free online trainings from Gali... (Creative news) , 14. 2011 - 0 comments


          Lamp, Mamp and Wamp   
LAMP is an acronym for Linux, Apache, MySQL and PHP. MAMP is an acronym for Mac, Apache, MySQL and PHP. And, as expected, WAMP is an acronym for Windows, Apache, MySQL and PHP. They are downloads which package together Apache, MySQL and PHP and allow you to build and host websites locally. It is […]
          How to Make Awesome Wallpapers in GIMP   
GIMP is my image editor/creator of choice on both Windows and Linux; however, I prefer Photoshop on Mac. If you have any experience with Photoshop, you will know how to use the tools, brushes and effects in it. GIMP, as I have reviewed before in both ‘5 Lightweight Alternatives to Popular Applications‘ and in ‘Top […]
          Some Awesome Gaming   
Now, Mac and Linux are often criticized for the lack of good games available on the two platforms compared to Windows. Perhaps Mac is moving in the right direction towards native big-budget games; however, Linux is not. We may have WINE, which is something I will do a feature on, but it doesn't replace the real game.
          Top 5 Free Windows Applications   
Here is my view on what the top free Windows apps are. All the apps' websites are linked in their names. Also, I have left out software such as browsers, as I will be doing a later focus on those.
          Fake News From Fossbytes and Techworm   

          Scientific Linux 7.3 Released   
  • Scientific Linux 7.3 Officially Released, Based on Red Hat Enterprise Linux 7.3

    After two Release Candidate (RC) development builds, the final version of the Scientific Linux 7.3 operating system arrived today, January 26, 2017, as announced by developer Pat Riehecky.

    Derived from the freely distributed sources of the commercial Red Hat Enterprise Linux 7.3 operating system, Scientific Linux 7.3 includes many updated components and all the GNU/Linux/Open Source technologies from the upstream release.

    Of course, all of Red Hat Enterprise Linux's specific packages have been removed from Scientific Linux, which now supports Scientific Linux Contexts, allowing users to create local customization for their computing needs much more efficiently than before.

  • Scientific Linux 7.3 Released

    For users of Scientific Linux, the 7.3 release is now available, based on Red Hat Enterprise Linux 7.3.


          Scientific Linux 6.7 Officially Released, Based on Red Hat Enterprise Linux 6.7   

The Scientific Linux team, through Pat Riehecky, has had the great pleasure of announcing the release and immediate availability for download of the Scientific Linux 6.7 computer operating system.

Read more


          Carl Sagan's solar-powered spacecraft is in trouble   
  • Carl Sagan's solar-powered spacecraft is in trouble
  • Software Glitch Pauses LightSail Test Mission

    But inside the spacecraft's Linux-based flight software, a problem was brewing. Every 15 seconds, LightSail transmits a telemetry beacon packet. The software controlling the main system board writes corresponding information to a file called beacon.csv. If you’re not familiar with CSV files, you can think of them as simplified spreadsheets—in fact, most can be opened with Microsoft Excel.


          Browser - Is Chrome or Mozilla better?? ..read and analyze the news.... (AntoninoTonyMalavenda)   
AntoninoTonyMalavenda writes in the Browser category: The rendering engine that could write the future of Firefox, or of a completely new browser, will be compatible with Linux, Android, OS X and Firefox OS. The biggest new feature of Servo will be the
go to the latest updates on: internet chrome mozilla google
1 vote

Go to the full article » .Is Chrome or Mozilla better?? ..read and analyze the news.....
          Docker Swarm and constraints in the real world   

I am picking up from the previous post and from this one. Rereading them, I try to be self-critical. I had used BIG words to describe what Docker offers, both with and without Swarm. I heaped on the praise, showing with working examples what was possible. And indeed, how could you doubt the merits of Docker (Swarm) when you realize you can deploy services and more with simple terminal commands? How can you not be left open-mouthed(?!), during the demos, when you see how services are installed and run, and how, once a host accidentally drops out of the network, Swarm takes charge and, completely on its own, picks up the lost service and installs it on another host?

All of this is nice only if you live in the world of demos; the real world is another matter. Both in the cloud and in the real world of a simple web farm, there are machines earmarked for this or that service (CPU performance, memory capacity, disk size and type). Nothing against Docker Swarm's scalability, but if there are two machines acting as web servers and one of them has to be taken offline for maintenance, why should Docker take the liberty of installing the same service on the machine where the other web server instance is already running? Or, worse, why should it install the web server on a machine where we want a database to run? In real machine configurations I have to make sure that some machines, where delicate services such as the database run, are not reachable directly from the outside, along with other similar restrictions.

In this post I want to address exactly these points and how Docker Swarm can be configured with better customization and, why not, highlight some other problems. Let's start from the beginning: normally the various machines have different performance and permission requirements depending on how they are used. As written above, a database machine will most likely need large disks and borderline-paranoid restrictions on network access; the machine that exposes web services to the internet (a reverse proxy such as NGINX) will certainly not need large disks, and only the minimum permissions needed to dispatch to the web services running on the other machines in the network, and so on... How do you achieve all of this with Docker (Swarm)?

First of all, labels can be defined at the level of the individual machine (host). This lets us restrict the assignment of containers to designated machines. In my example I will set things up so that I have several hosts, to which I will assign different services:

  • Nginx
  • Two instances of the web APIs (the same ones seen in the previous posts)
  • A further service that will be used by the previous web APIs to simulate an internal call

First I wrote a simple web API that returns the current date and time in the UTC time zone. The code is available here. It all boils down to a single controller:

using System;
using Microsoft.AspNetCore.Mvc;

namespace MVC5ForLinuxTest2.Controllers
{
    [Route("api/[controller]")]
    public class DatetimeUTCController : Controller
    {
        // GET: api/values
        [HttpGet]
        public string Get()
        {
            return DateTime.UtcNow.ToString("yyyy-MM-dd HH:mm:ss");
        }
    }
}

Calling it directly with http://localhost:5001/datetimeutc, the response will be:

2016-12-10 12:26:29

As written above, this API will simulate an internal request (I didn't want to clutter the examples with a real database), and to the API seen several times already (source code here) I added a SystemInfoUTCController controller, which requests the DateTime from the API just shown. Code for this controller:

public class SystemInfoUTCController : Controller
{
    private readonly ISystemInfo _systemInfo;
    private readonly IHttpHelper _httpHelper;
    private readonly AppSettings _appSettings;

    public SystemInfoUTCController(ISystemInfo systemInfo, IHttpHelper httpHelper, IOptions<AppSettings> appSettings)
    {
        _systemInfo = systemInfo;
        _httpHelper = httpHelper;
        _appSettings = appSettings.Value;
    }

    [HttpGet]
    public async Task<DTOSystemInfoUTC[]> Get()
    {
        DateTime datetimeValue = new DateTime(1970, 1, 1);
        XElement value = await _httpHelper.GetHttpApi(_appSettings.DateTimeUrl);
        var content = value.XPathSelectElement(".");
        if (content != null && !string.IsNullOrEmpty(content.Value))
        {
            datetimeValue = DateTime.Parse(content.Value);
        }

        var obj = new DTOSystemInfoUTC();
        obj.Guid = _systemInfo.Guid;
        obj.DateTimeUTC = datetimeValue;
        return new DTOSystemInfoUTC[] { obj };
    }
}

This API uses an external class behind the IHttpHelper interface, whose code is:

public class HttpHelper : IHttpHelper
{
    public async Task<XElement> GetHttpApi(string url)
    {
        using (var client = new HttpClient())
        {
            try
            {
                client.BaseAddress = new Uri(url);
                var response = await client.GetAsync("");
                response.EnsureSuccessStatusCode(); // throws if the status code is not a success
                var stringResponse = await response.Content.ReadAsStringAsync();
                var result = new XElement("Result", stringResponse);
                return result;
            }
            catch (HttpRequestException)
            {
                return new XElement("Error");
            }
        }
    }
}

Now, calling this API:

http://localhost:5000/api/systeminfoutc

We will get this response:

[{"guid":"d5c3ea31-4049-48f3-bf53-152bd31f29dd","dateTimeUTC":"1970-01-01T00:00:00"}] [{"guid":"d5c3ea31-4049-48f3-bf53-152bd31f29dd","dateTimeUTC":"2016-12-10T12:45:21"}]

1970-01-01 if the second API, datetimeutc, is not up, or the real date if everything is working. Great; now we just have to create the Docker images as usual (the source code contains the Dockerfile to build them, or you can use the public images I created for these examples).

The time has come to specify which machines Docker Swarm should use for the various services. First of all, a dedicated network for Docker Swarm must be created:

docker network create --driver overlay mynet

I check that everything is correct:

# docker network ls NETWORK ID NAME DRIVER SCOPE 0f1edcc32683 bridge bridge local 23e41e7b27e5 docker_gwbridge bridge local ff4d514a96e8 host host local d10zid5t65cb ingress overlay swarm 0x9abv9uqtgh mynet overlay swarm 46cd3ea3cb27 none null local

Now it is time to configure the hosts. For these examples I created four virtual machines:

  • 192.168.0.15 osboxes1 web=true
  • 192.168.0.16 osboxes2 db=true
  • 192.168.0.17 osboxes3 web=true
  • 192.168.0.18 osboxes4 nginx=true

.15 and .17 will be for the systeminfo and systeminfoutc web APIs, .16 for datetimeutc; I will deal with .18 shortly. There are several ways to specify those labels in Docker; the most convenient one for me is editing the Docker service file. The file /lib/systemd/system/docker.service:

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd -H fd://
ExecReload=/bin/kill -s HUP $MAINPID
...

It is enough to modify the ExecStart line:

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd -H fd:// --label web=true
ExecReload=/bin/kill -s HUP $MAINPID
...

Having added the right label declaration on each machine and restarted the Docker services, we can now check that everything works with a few simple terminal commands:

# docker node ls -f "label=web"
ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
3ta2im9vlfgrbmsyupgdyvljl    osboxes3  Ready   Active
83f6hk7nraat4ikews3tm9dgm *  osboxes1  Ready   Active        Leader
# docker node ls -f "label=db"
ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
897zy6vpbxzrvaif7sfq2rhe0    osboxes2  Ready   Active
# docker node ls -f "label=nginx"
ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
002iev7q6mgdor0zbo897noay    osboxes4  Ready   Active

Perfect, I have the result I wanted. Having created the Docker images, I can now start installing them on the machines I choose:

docker service create --replicas 1 --constraint engine.labels.db==true --name app1 -p 5001:5001 --network mynet sbraer/aspnetcorelinux:api2

Note the constraint and replicas parameters. If everything went well:

# docker service ps app1 ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR 0x6nbwrahtd1x7x31exal4sb8 app1.1 sbraer/aspnetcorelinux:api2 osboxes2 Running Starting 7 seconds ago

Great, the service is up and running on the designated machine.

# docker service ls ID NAME REPLICAS IMAGE COMMAND 7studfb313f7 app1 1/1 sbraer/aspnetcorelinux:api2

And with this command I can indeed see that only one instance of this web API has been started (I used the --replicas 1 parameter). This, however, brings us back to the problem mentioned at the beginning of this post. In the case of the main web API, which has to be installed on two machines, what happens if one of the two is shut down? Docker Swarm will install a copy of it on the available machine (and on no machine that does not have the same label and constraint definition). Let's install them:

docker service create --replicas 2 --constraint engine.labels.web==true --name app0 -p 5000:5000 --network mynet sbraer/aspnetcorelinux:api1

And what if we didn't know how many machines we have available for a service?

docker service create --replicas $(docker node ls -f "label=web" -q | wc -l) --constraint engine.labels.web==true --name app0 -p 5000:5000 --network mynet sbraer/aspnetcorelinux:api1

This makes things a bit easier but does not solve the main problem. To solve it once and for all, it is enough to dig through Docker Swarm's parameters and find --mode global. This parameter will install the Docker container on all the machines available in the Docker Swarm network, but with the constraint clause it will only do so on the designated machines:

docker service create --mode global --constraint engine.labels.web==true --name app0 -p 5000:5000 --network mynet sbraer/aspnetcorelinux:api1

This way, Docker Swarm will install a single container instance per machine, and we also get two useful consequences: the first is that if a machine goes down it will not install useless duplicates; the second, far more important and useful, is that, for load reasons or otherwise, it will be enough to add more machines to the network with the configuration label we want, and Docker Swarm will install further containers completely on its own. Excellent!

Here is what happened with the previous command:

# docker service ps app0 ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR 9d0a7oms384cbli78a0vuwwre app0.1 sbraer/aspnetcorelinux:api1 osboxes1 Running Starting 36 seconds ago 04y5c35orviamoogaetedjh1i app0.2 sbraer/aspnetcorelinux:api1 osboxes3 Running Starting 35 seconds ago

Now that I have started all the main services, I check that everything works on all the machines:

# curl localhost:5000/api/systeminfo
[{"guid":"883bc3f9-f636-45f6-a05b-f91a09f95b13","dateTime":"2016-12-10T12:30:33.797293+00:00"}]
# curl localhost:5000/api/systeminfoutc
[{"guid":"d5c3ea31-4049-48f3-bf53-152bd31f29dd","dateTimeUTC":"2016-12-10T12:31:08"}]

Finally, a few words on the accessibility of services in Docker. These are the possible cases:

  • Outside world to a Docker service
  • Docker to an external service
  • Docker to Docker

The first case is the one used so far: from a browser or a terminal I call a service exposed inside a Docker container; in this case Docker, when started, must expose the ports we are interested in (5000 for the main API in these examples, 5001 for the DateTime UTC one), and to call the service it is enough to use the IP of any machine (if we are in Docker Swarm and the container is started as a service). The second case is not covered in my examples because it is the simplest and poses no problem: if our API had needed a database such as SQL Server installed on an external server, the connection string would be the classic one and there would have been no issues. The last case is the most complex to understand at first; the rules are like those of the first case, but using the IP would cause problems because each container lives as if it were on a machine of its own. The simplest solution is to use the service's name so that, if it is a service distributed via swarm, we do not have to worry about checking which machine runs it and whether that machine is up. In the configuration file of the systeminfoutc web API I defined the URL of the API to call:

"AppSettings": { "DateTimeUrl": "http://app1:5001/api/DatetimeUTC" }

Every service started in Docker Swarm will be visible to the other running containers; for more granular control, nothing prevents us from creating several networks in Docker with a few access points and shared segments. In this case, I admit, I personally did not find any advantage in the simple tests I ran. Anyway, cutting it short, we have reached the point where we somehow have to expose the web API, and only that, to the internet. This is where the machine on which we will install NGINX comes in. For those who do not know it, it is a web server/reverse proxy that is very well known and widely used on the web for its performance. Its configuration is simple. In my case, to expose the two main APIs (which answer on port 5000) I will use this configuration file (dotnet.conf):

worker_processes 1;

events {
    worker_connections 1024;
}

http {
    upstream web-app {
        server app0:5000;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://web-app;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection keep-alive;
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }
    }
}

Note the definition of the service answering on port 5000: app0. This immediately tells you that I will run NGINX from Docker. I will also create a dedicated container with this Dockerfile:

FROM nginx COPY dotnet.conf /etc/nginx/nginx.conf EXPOSE 80

Having created the image, I can now start it:

docker service create --mode global --network mynet -p 80:80 --name nginx --constraint engine.labels.nginx==true sbraer/nginx

If everything works, I can now call the API with:

# curl localhost/api/systeminfo
[{"guid":"d5c3ea31-4049-48f3-bf53-152bd31f29dd","dateTime":"2016-12-10T12:37:54.555623+00:00"}]
# curl localhost/api/systeminfoutc
[{"guid":"883bc3f9-f636-45f6-a05b-f91a09f95b13","dateTimeUTC":"2016-12-10T12:38:01"}]

In the real world, at this point, you would lock down the network so that it is not accessible from the outside except on port 80 of the machine running NGINX, and repeat the test:

# curl http://192.168.0.18/api/systeminfo [{"guid":"d5c3ea31-4049-48f3-bf53-152bd31f29dd","dateTime":"2016-12-10T12:39:52.582317+00:00"}]

As a test, let's stop the internal service:

docker service rm app1

New test:

#curl http://192.168.0.18/api/systeminfoutc [{"guid":"d5c3ea31-4049-48f3-bf53-152bd31f29dd","dateTimeUTC":"1970-01-01T00:00:00"}]

As written before regarding the ways of accessing the various services from/to Docker, if we had wanted to install NGINX directly on a machine, the configuration file would have had to point directly to all the IPs exposing the web API:

upstream web-app { server 192.168.0.15:5000; server 192.168.0.17:5000; }

In that case, adding more machines dedicated to this service would mean editing this file by hand, something that would not happen in the previous setup.

Having got this far, let's look at a few important points. First of all, with the current version of Docker Swarm (1.12 and 1.13) it is not possible to deploy locally built images; the images must be pulled from a hub, official or otherwise (in my examples I uploaded all the Docker images to the official hub). Those who do not want to make their creations public (for whatever reason) can use hubs that also allow uploading private images, or, more simply, can set up their own registry server in-house without any trouble (documentation here). Finally... what can I say? I know, I still haven't mentioned the problems... The most annoying one? When the instances of a service are distributed across several machines, there is no way to get details about the IP, nor a direct way to reach one specific instance of that service. It seems like a small thing, but it is a serious problem, because the system described so far works perfectly for services that can be scaled independently of one another, but not for services that have to be configured and wired together. What do I mean? If I wanted to distribute a database across several machines with Docker Swarm, as I did in the example above, how should I proceed if I then wanted to configure those instances as a cluster with replication?
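
Just as a sketch of the in-house registry route (the registry:2 image and the commands below are the standard ones, but the host name is a placeholder, the registry's default port 5000 clashes with the demo API used here, and an HTTP-only registry also has to be allowed on each daemon via --insecure-registry):

docker run -d -p 5000:5000 --restart=always --name registry registry:2
docker tag sbraer/aspnetcorelinux myregistry.local:5000/aspnetcorelinux
docker push myregistry.local:5000/aspnetcorelinux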

I confess I ran many tests on this, but with the current version of Docker Swarm an automatic solution does not exist: either that database or other service is built for it, or there is little to be done. If I absolutely want to use Docker Swarm, for each machine in the cluster for that database I will have to create dedicated images with differentiated configuration files. It works, but in the applications that need access you will have to make sure you have connection strings that allow defining multiple servers. The most humane solution I found is to use single Docker instances connected, as seen here, with Consul (which explains the reason for that post).

And with that, 2016 is over too.

Tags:

Continue reading Docker Swarm and constraints in the real world.


(C) 2017 ASPItalia.com Network - All rights reserved


          Docker + Consul + Registrator   

I was writing some additional notes on Docker Swarm when I decided to write one more post rather than cram too much stuff into a single one. I'm already not much of a writer; if I then pile in too much material, nothing can really be understood…

In a previous post I had written about the advantages of Consul for service discovery. I will quickly go over the steps to install it on a Linux distribution. For Windows, see the previous post; and even though months have gone by, I still have not investigated the integration between Consul and the operating system's DNS there.

First of all, so that Consul can hook into the machine's DNS, a DNS service has to be installed; in my case I chose the simplicity of Dnsmasq. On Debian and its derivatives it is enough to run:

apt-get install dnsmasq

Once this is done and you have made sure the service is running, a line must be added to a configuration file so that it can be used by Consul. In the /etc/dnsmasq directory you need to create a file named 10-consul with this content:

# Enable forward lookup of the 'consul' domain:
server=/consul/127.0.0.1#8600

Now it is Consul's turn. Having downloaded it from the repositories or from the website, let's see what it usefully offers with Docker. First of all, in the tests I will describe here I used two virtual machines with these IPs:

  • 192.168.56.102
  • 192.168.56.101

If the /etc/consul.d directory does not exist on the machines, I recommend creating it, because it can be useful whenever configurations need to be created. On machine 102, I start Consul:

consul agent -server -bootstrap-expect 1 \
-data-dir /tmp/consul -node=agent-one \
-bind=192.168.56.102 -config-dir /etc/consul.d

Quindi sulla macchina 101:

consul agent -data-dir /tmp/consul \
-node=agent-two \
-bind=192.168.56.101 -config-dir /etc/consul.d

Finally, from machine 102, I join the two Consul instances:

consul join 192.168.56.101

If everything went well:

# consul members
Node Address Status Type Build Protocol DC
agent-one 192.168.56.102:8301 alive server 0.6.4 2 dc1
agent-two 192.168.56.101:8301 alive client 0.6.4 2 dc1
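
As an aside, the /etc/consul.d directory mentioned above is the one Consul reads at startup (it is passed via -config-dir), so it is where you would drop JSON files if you ever wanted to register a service by hand. A minimal sketch of such a file; the name, port and check URL are made up:

{
  "service": {
    "name": "aspnetcorelinux",
    "port": 5000,
    "check": {
      "http": "http://localhost:5000/api/systeminfo",
      "interval": "10s"
    }
  }
}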

OK, and now we are at the same point as in the previous post. It's time to bring Docker into the picture. There is an external service that can update Consul directly with all the new containers created in Docker, all automatically. This service is called Registrator. To use it, you just launch it on the machines involved as a Docker container. On machine 102:

docker run -d \
--name=registrator \
--net=host \
--volume=/var/run/docker.sock:/tmp/docker.sock \
gliderlabs/registrator:latest \
consul://localhost:8500

Now let's pick up the simple API I had created, and start it on this machine:

docker run -it --name app1 -p 5000:5000 sbraer/aspnetcorelinux

I check that it works:

#curl localhost:5000/api/systeminfo
[{"guid":"a382678c-e38a-445c-831c-582f775c54f7"}]

This API will automatically be added to Consul's services.

#curl localhost:8500/v1/catalog/services
{"aspnetcorelinux":[],"consul":[]}

And here is our API, aspnetcorelinux, present. You can see that the name used is not the one I gave when I created the container, but the name of the image itself. Now I check that it works through Consul's DNS:

#curl aspnetcorelinux.service.consul:5000/api/systeminfo
[{"guid":"a382678c-e38a-445c-831c-582f775c54f7"}]

There we go, it works. And on the second machine, 101? Let's see whether Consul has updated the list of services on that machine as well:

#curl localhost:8500/v1/catalog/services
{"aspnetcorelinux":[],"consul":[]}

The service is there, but does it work?

#curl aspnetcorelinux.service.consul:5000/api/systeminfo
[{"guid":"a382678c-e38a-445c-831c-582f775c54f7"}]

Perfect. Still referring to that post of mine, I had also written about Consul's ability to load-balance requests. To check this, I install the same service on the second machine:

docker run -it --name app1 -p 5000:5000 sbraer/aspnetcorelinux

Let's see if it works:

curl aspnetcorelinux.service.consul:5000/api/systeminfo
[{"guid":"a382678c-e38a-445c-831c-582f775c54f7"}]

The command seen earlier shows no difference:

#curl localhost:8500/v1/catalog/services
{"aspnetcorelinux":[],"consul":[]}

But we can get more specific details with:

#curl localhost:8500/v1/catalog/service/aspnetcorelinux

And here is the result:

(screenshot of the catalog output, showing the service registered on both machines)

The API is available on both machines and will be load balanced when called from a third PC; if the service is present locally, Consul always points back to the same machine.
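
If you are curious about what the DNS side is returning, a quick way to peek (a sketch; it assumes the dig utility is installed) is to ask Consul's own DNS port for the SRV records of the service, which list every node currently providing it:

dig @127.0.0.1 -p 8600 aspnetcorelinux.service.consul SRV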

I almost forgot: if anyone is wondering whether Registrator also works with Docker Swarm, the answer is no. Reading various discussions on the subject, it seems that Docker Swarm does not expose events that can be intercepted by external services, so Registrator is not able to catch the creation and removal of Swarm services. In the future? Who knows...

Tags:

Continue reading Docker + Consul + Registrator.


(C) 2017 ASPItalia.com Network - All rights reserved


          Asp.net Core, Docker and Docker Swarm   

To me it feels like an eternity has passed. Around 2004 you could run code written for the .NET Framework on Linux with Mono. As a long-time enthusiast of the Linux world as well, the idea had seemed interesting to me right from the start (I had covered some points on this blog of mine too), and I would never have believed that out of the blue Microsoft itself would one day decide to fully support a direct competitor when it began developing asp.net Core.

Developing web applications on Linux is now easier than ever, and online you can find plenty of guides showing how well within reach of the average developer it is (I recall that even with mono and Linux it was possible to develop web applications with asp.net, but personally I saw that as a demonstration of mono's potential and nothing to actually use in production: I would never have dreamed of developing asp.net web apps on Linux). An excellent guide, still being improved, on configuring asp.net core on Linux can be found in this post by Pietro Libro.

Microsoft's opening up to other worlds did not stop at Linux; it extended to the Apple world too, and, not content with that, it did not stop even in the face of newcomers like the container world, where Docker now rules, and it officially releases images for this world as well. Using it is simple: with Linux up and running, we can take a solution we have just built with Visual Studio and run it without installing anything (other than Docker itself, of course). For example, having downloaded the project into a directory, just start docker like this:

docker run -it --name azdotnet -v /home/az/Documents/docker/aspnetcorelinux/src/MVC5ForLinuxTest2:/app -p 5000:5000 microsoft/dotnet:1.0.1-sdk-projectjson

Once the images needed to start up have been downloaded, we are given a prompt inside the docker container, where we can type:

dotnet restore
dotnet run

The dependencies are downloaded and the project is compiled. At the end:

Project MVC5ForLinuxTest (.NETCoreApp,Version=v1.0) will be compiled because the version or bitness of the CLI changed since the last build Compiling MVC5ForLinuxTest for .NETCoreApp,Version=v1.0 Compilation succeeded. 0 Warning(s) 0 Error(s) Time elapsed 00:00:02.8545851 Hosting environment: Production Content root path: /dotnet2/src/MVC5ForLinuxTest Now listening on: http://0.0.0.0:5000 Application started. Press Ctrl+C to shut down.

From the browser:

Before continuing, a few words about the parameters used with docker. The -it parameter connects the current terminal to the contents of the container. We could also have used:

docker run -d ...

Where d stands for detach: this way we would have seen nothing on the terminal and would have stayed in the current Linux shell. On the other hand, we would not have seen the build output and could not have issued the build commands immediately. It is always possible to reconnect to a container started in detached mode to check what is happening, for example:

docker logs azdotnet

This command shows the contents of the terminal inside the container (adding the -f parameter, the command would not return to the prompt but would keep waiting for new messages). Finally, we could have reconnected with the command:

docker attach azdotnet

The -v parameter is used to manage volumes inside docker containers. To summarize, two types of volumes can be defined in docker. The first, the simplest and the one used in my example, creates a link inside the container to the disk of the host machine running the docker session. In my example there are two paths separated by ":": on the left is the path on the host's disk, on the right the path inside the container. In the example, /home/az/Documents/docker/aspnetcorelinux/src/MVC5ForLinuxTest2 is the host path that will be mapped as app inside docker. Just for completeness, the second type of volume is the one managed internally by docker: any volumes mounted this way are kept in docker's own private path; this latter type is handy for sharing, among several containers, directories and files placed in other containers.
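
Just to sketch that second type as well (the volume name is made up; the image is the same one used above):

# create a volume managed internally by docker
docker volume create --name shareddata

# any container can mount it by name; whatever is written to /data outlives the container
docker run -it --rm -v shareddata:/data microsoft/dotnet:1.0.1-sdk-projectjson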

The -p parameter is used to define which ports docker must open from the container towards the host; like the volume parameter, it also takes two values separated by a colon, where the value on the right defines which port is mapped in the container and the left-hand part defines which port it is mapped to on the host.

Finally we have to specify the name of the image we want to use; since it is stored in the official docker hub, we can refer to it simply as microsoft/dotnet. If it were on another docker image service we would have had to write the path complete with its domain: miohost.com/docker/hub/microsoft/dotnet. The parameter after the colon is the tag, which specifies which version we want to use. In this example we use a specific version; we could also have used latest to get the most recent one, but in real practice I advise against it because, as has happened to me several times, when versions change you can run into anomalies that force you to rework everything. In an early test I had specified latest as the tag, only to discover, when version 1.1 of asp.net core came out, that the project no longer compiled due to version differences in the dependencies. Another case that happened to me recently: using an ubuntu image as a base, version 14.04.4 included a command for decompressing a particular format, a command that was removed in the latest version, 16.04; when switching to this latest ubuntu version everything broke with a message that was initially incomprehensible and then turned out to come from that missing command.

We have often used azdotnet as a parameter value; this is the name we gave our container thanks to the --name parameter: had we not assigned one on the command line, docker would have made up its own. If we are still in the terminal attached to docker, we can leave it with the sequence Ctrl+P Ctrl+Q. Using the docker ps command we can see information about the containers running on our machine:

$ docker ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES df65019f69a5 microsoft/dotnet:1.0.1-sdk-projectjson "/bin/bash" 15 seconds ago Up 11 seconds 0.0.0.0:5000->5000/tcp azdotnet

If I had not specified the name, I would have ended up with this random goofy_shaw:

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 04c3276cfd73 microsoft/dotnet:1.0.1-sdk-projectjson "/bin/bash" 10 seconds ago Up 7 seconds 0.0.0.0:5000->5000/tcp goofy_shaw

Do we want to stop the process in this container (in either case)?

docker stop azdotnet ... or ... docker stop goofy_shaw

Do I want to delete the container and the image?

docker rm azdotnet docker rmi azdotnet

Do I want to be destructive and delete anything present in docker?

docker rm $(docker ps -qa) docker rmi $(docker images -qa)

The convenience does not stop here. Did I discover a small mistake? Since the volume is mounted locally, I can just do (if I have Microsoft's VSCode editor, but any text editor does the same thing):

code .

Then, once the file is saved, I can rebuild and restart from the terminal inside docker, after stopping it with Ctrl+C:

dotnet restore
dotnet run

And see the changes. We could also create a ready-made image with our app inside it. The simplest way is to download the image we are interested in, save our app's files inside it, then commit the changes and use it as many times as we want. Alternatively, we can add to our web app's source code a file for automatically building the docker image. Here is an example that will also be picked up again later:

# Example from AZ
FROM microsoft/dotnet:1.0.1-sdk-projectjson
MAINTAINER "AZ"
COPY ["./src/MVC5ForLinuxTest2/", "/app"]
COPY ["./start.sh", "."]
RUN chmod +x ./start.sh
CMD ["./start.sh"]

With this file we can create an image with this command:

docker build -t myexample/exampledotnet .

myexample/exampledotnet will be the name of the image we can use to reference and start a container with its contents. If we run this command, we will see that docker downloads, if not already present, the base image for dotnet; then, after the maintainer information line, the local files from the ./src/MVC5ForLinuxTest2/ directory are copied into the image under the /app path. The same goes for the start.sh file. That file is then given the executable flag, and when the image is started, it is exactly this file that will be run. Its content? This:

#!/bin/sh
cd app
dotnet restore
dotnet run

Of course, we could also have created an already-compiled image, but this case will help us understand an important step that we will see further on.
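
Just to sketch what that precompiled variant could look like (the runtime-only tag, the publish folder and the dll name are assumptions for illustration, not taken from the repository):

# on the host, publish first: dotnet restore && dotnet publish -c Release -o publish
FROM microsoft/dotnet:1.0.1-core
COPY ["./publish/", "/app"]
WORKDIR /app
EXPOSE 5000
CMD ["dotnet", "MVC5ForLinuxTest2.dll"]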

Creating images is not the purpose of this post, so I will move on, also because the official documentation is clear on the subject. I want to continue because the rest is a situation that is evolving right as I am writing this post. In my previous posts I had covered the world of microservices and the possibility of distributing them across several machines. In the last post in particular, I looked at the advantages of Consul for service discovery and more. Around that time I had also become interested in whether docker itself could do this. With version 1.1x, I discovered that docker natively provides the ability to build a cluster of hosts on which the various containers will be hosted. Docker swarm provides the tools to do this, but only since version 1.12 has the whole thing been simplified. In earlier versions, from my point of view, it was madness: the machines that had to manage everything needed to be hooked up to Consul. Moreover, the configuration of everything was convoluted and, from my own direct tests, a trivial mistake in the configuration was enough to blow the whole thing up (I know, the fault is not docker's but mine). Since version 1.12 everything has become trivial, even though, I will say it right away, you end up with a sneaky bug that I will describe shortly. First of all, what is Docker swarm? It is nothing more than managing docker as a cluster. What we did earlier on a single machine with the basic docker commands, with Docker swarm we can do on several machines without worrying about how to configure everything. Is it all simple? Yes, absolutely! Docker's developers have created an interesting and truly astonishing project for what it promises and delivers (bugs aside). Starting from the beginning, what do we need to stand up a cluster with Docker swarm? One or more machines, called managers, that will manage all the connected hosts (the official documentation recommends N/2+1, where N is the number of machines, but only once you go beyond about seven machines). You can also try everything with virtual machines, as I did for the example in this post, and Linux is practically mandatory (any distribution is fine; Apple and Windows machines are not recommended). In my case I had two machines with these two IPs:

192.168.0.15 192.168.0.16

The machine ending in 15 will be the manager and 16 the worker (by default the manager machine will also be used to run containers, although with a single command you can make it act only as a manager). On this machine, from a terminal, we start everything up:

docker swarm init --advertise-addr 192.168.0.15

If everything works perfectly, the reply will be:

Reply: swarm initialized: current node (83f6hk7nraat4ikews3tm9dgm) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-0hmjrpgm14364r2kl2rkaxtm9tyy33217ew01yidn3l4qu3vaq-8e54w2m4mrwcljzbn9z2yzxrz 192.168.0.15:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

It couldn't be clearer than that. The swarm network has been created and we can add manager or worker machines as described in the text. So, on machine 16, to join it, we write:

docker swarm join --token SWMTKN-1-0hmjrpgm14364r2kl2rkaxtm9tyy33217ew01yidn3l4qu3vaq-8e54w2m4mrwcljzbn9z2yzxrz 192.168.0.15:2377

If the network is configured correctly and the necessary ports are not blocked by a firewall (check the Docker swarm documentation; off the top of my head I don't remember them), the answer we get will be:

This node joined a swarm as a worker.

Now, from the manager, 15, let's see whether that is true:

docker node ls ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS 83f6hk7nraat4ikews3tm9dgm * osboxes1 Ready Active Leader 897zy6vpbxzrvaif7sfq2rhe0 osboxes2 Ready Active

Do we want more technical info?

docker info Containers: 0 Running: 0 Paused: 0 Stopped: 0 Images: 2 Server Version: 1.13.0-rc2 Storage Driver: aufs Root Dir: /var/lib/docker/aufs Backing Filesystem: extfs Dirs: 61 Dirperm1 Supported: true ... Kernel Version: 4.8.0-28-generic Operating System: Ubuntu 16.10 OSType: linux Architecture: x86_64 CPUs: 2 Total Memory: 1.445 GiB ...

Perfect. Now we can install our containers and distribute them across the cluster. For this example I created a simple web application with a web API that returns a unique code for each instance. The source code is public and available at this URL:
https://bitbucket.org/sbraer/aspnetcorelinux
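
Just to give an idea of what the API does, here is a minimal sketch of such a controller, assuming an ASP.NET Core Web API project; the real code is in the repository above and may differ (the controller name and shape here are illustrative only):

// Minimal sketch: a controller that answers with a GUID created once per process,
// so every running container instance returns a different value.
using System;
using Microsoft.AspNetCore.Mvc;

[Route("api/[controller]")]
public class SystemInfoController : Controller
{
    // Generated once per process: each instance gets its own value.
    private static readonly Guid InstanceId = Guid.NewGuid();

    [HttpGet]
    public IActionResult Get()
    {
        return Ok(new[] { new { guid = InstanceId } });
    }
}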

By signing up for free on Docker Hub we can create our own images; in addition, the ability to run automated builds from our Dockerfiles stored on GitHub or Bitbucket was introduced recently. Here is the link for this post's example:
https://hub.docker.com/r/sbraer/aspnetcorelinux/

The convenience is that I can modify my app's source code locally, commit it to Bitbucket where I have a free account, and a few minutes later have a Docker image ready. It's exactly what we need for our example.

docker service create --replicas 1 -p 5000:5000 --name app1 sbraer/aspnetcorelinux

The Docker command is now slightly different. You immediately notice the addition of service: this tells Docker that we want to work in the swarm cluster. The behavior is almost the same as what we have seen so far, but we can no longer attach from the terminal in the usual way. Before digging deeper, let's see what happened:

docker service ls
ID            NAME  REPLICAS  IMAGE                   COMMAND
cx0n4fmzhnry  app1  0/1       sbraer/aspnetcorelinux

The image has been downloaded and is about to be started. After a few moments we can call the API and see the result:

curl localhost:5000/api/systeminfo
[{"guid":"4160a903-dc66-4660-aafc-5ec8c9549869"}]

And we can do this from every machine in the cluster. And what if we wanted to run more copies of this app?

docker service scale app1=5

Done: Docker will now create four more containers, which will be distributed across the two machines:

docker service ps app1
ID                         NAME    IMAGE                   NODE      DESIRED STATE  CURRENT STATE             ERROR
6x8j50wn9475jc70fop0ezy7s  app1.1  sbraer/aspnetcorelinux  osboxes1  Running        Running 6 minutes ago
1cydg0cr7re8suxluh2k7y0kc  app1.2  sbraer/aspnetcorelinux  osboxes2  Running        Preparing 51 seconds ago
dku0anrmfbscbrmxce9j7wcnn  app1.3  sbraer/aspnetcorelinux  osboxes2  Running        Preparing 51 seconds ago
5vupi73j7jlbjmbzpmg1gsypr  app1.4  sbraer/aspnetcorelinux  osboxes1  Running        Running 44 seconds ago
e5a6xofjmxhcepn60xbm9ef7x  app1.5  sbraer/aspnetcorelinux  osboxes1  Running        Running 44 seconds ago

Once they are up, we will see that Docker Swarm is able to balance all the requests (the guid shows that a different process is answering):

osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo
[{"guid":"4160a903-dc66-4660-aafc-5ec8c9549869"}]
osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo
[{"guid":"6c9f1637-7990-4162-b69e-623afee378e6"}]
osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo
[{"guid":"853b6cbb-6394-4a2e-87b9-2f9a7fa2af06"}]
osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo
[{"guid":"4160a903-dc66-4660-aafc-5ec8c9549869"}]
osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo
[{"guid":"6c9f1637-7990-4162-b69e-623afee378e6"}]
osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo
[{"guid":"853b6cbb-6394-4a2e-87b9-2f9a7fa2af06"}]
osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo
[{"guid":"4160a903-dc66-4660-aafc-5ec8c9549869"}]
osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo
[{"guid":"6c9f1637-7990-4162-b69e-623afee378e6"}]
osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo
[{"guid":"853b6cbb-6394-4a2e-87b9-2f9a7fa2af06"}]
osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo
[{"guid":"4160a903-dc66-4660-aafc-5ec8c9549869"}]

Now I want to add one more piece of information to the API response. I decide to also include the date and time. I create a new branch, master2, make the change, commit, and build a new Docker image with the dev tag. And what if I wanted to update the five instances running on my machines? Docker Swarm does all of this for me:

docker service update --image sbraer/aspnetcorelinux:dev app1

Docker will now stop the containers one at a time, update them and start them again:

docker service ps app1
ID                         NAME       IMAGE                       NODE      DESIRED STATE  CURRENT STATE            ERROR
6x8j50wn9475jc70fop0ezy7s  app1.1     sbraer/aspnetcorelinux      osboxes1  Running        Running 10 minutes ago
2g98f5qnf3tbtr83wf5yx0vcr  app1.2     sbraer/aspnetcorelinux:dev  osboxes2  Running        Preparing 5 seconds ago
1cydg0cr7re8suxluh2k7y0kc  \_ app1.2  sbraer/aspnetcorelinux      osboxes2  Shutdown       Shutdown 4 seconds ago
dku0anrmfbscbrmxce9j7wcnn  app1.3     sbraer/aspnetcorelinux      osboxes2  Running        Preparing 4 minutes ago
5vupi73j7jlbjmbzpmg1gsypr  app1.4     sbraer/aspnetcorelinux      osboxes1  Running        Running 4 minutes ago
e5a6xofjmxhcepn60xbm9ef7x  app1.5     sbraer/aspnetcorelinux      osboxes1  Running        Running 4 minutes ago

Giving it time to update everything, here is the result:

curl localhost:5000/api/systeminfo
[{"guid":"55d63afe-0b67-47d4-a1d2-4fb0b9b83bef","dateTime":"2016-11-26T21:14:20.411148+00:00"}]
osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo
[{"guid":"e02997ad-ea05-418f-9be4-c1a9b71bff85","dateTime":"2016-11-26T21:14:25.617665+00:00"}]
osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo
[{"guid":"ff0b1dfa-42e5-4725-ab11-6fdb83488ace","dateTime":"2016-11-26T21:14:27.157971+00:00"}]
osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo
[{"guid":"0a7578fb-d7cf-4c6f-a1fa-07deb7cddbc0","dateTime":"2016-11-26T21:14:27.789131+00:00"}]
osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo
[{"guid":"ce87341b-d445-49f6-ae44-0a62a844060e","dateTime":"2016-11-26T21:14:28.303101+00:00"}]
osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo
[{"guid":"55d63afe-0b67-47d4-a1d2-4fb0b9b83bef","dateTime":"2016-11-26T21:14:28.873405+00:00"}]

Do we want to stop everything?

docker service rm app1

Also interesting is Docker's ability to cope with external disasters. If, in my case, I shut down the 16 machine, Docker would notice, stop sending requests to the APIs on that machine, and immediately start, on the other machines in the cluster, the same number of containers that were lost on it.

And what if I wanted to do as in the first example and see the log of a specific container? Unfortunately this is not exactly easy in Docker. First of all you have to go to the machine where it is running and type:

docker ps -a

Then, as seen in the case where the --name parameter is not assigned, read the output of this command to extract the container name and connect to it. Not exactly convenient.

So everything is wonderful... no, because, as some may have guessed, there is a basic problem in how Docker Swarm manages a cluster of images. We have seen that we can scale an image and Docker will automatically deploy it on that machine or on others; but, going back to our API example, when does it become available to Docker's load balancing? And this is where the problem arises: Docker keeps the container being created out of the internal load balancer only until it is started; if the service, as in this case, is slow to come up because it downloads dependencies and compiles, what happens? Simple: Docker will route requests to instances that are not yet able to handle the response:

curl localhost:5000/api/systeminfo
[{"guid":"55d63afe-0b67-47d4-a1d2-4fb0b9b83bef","dateTime":"2016-11-26T21:14:20.411148+00:00"}]
osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo
curl: (6) Couldn't resolve host 'localhost'
osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo
curl: (6) Couldn't resolve host 'localhost'
osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo
curl: (6) Couldn't resolve host 'localhost'
osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo
[{"guid":"ce87341b-d445-49f6-ae44-0a62a844060e","dateTime":"2016-11-26T21:14:28.303101+00:00"}]
osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo
curl: (6) Couldn't resolve host 'localhost'

And here is the first problem... How to solve it? Fortunately there is a simple solution. In the Dockerfile that builds the image we can use a dedicated instruction that makes the container available only if a command returns a positive result. Here is the new Dockerfile:

# Example from AZ
FROM microsoft/dotnet:1.0.1-sdk-projectjson
MAINTAINER "AZ"
COPY ["./src/MVC5ForLinuxTest2/", "/app"]
COPY ["./start.sh", "."]
RUN chmod +x ./start.sh
HEALTHCHECK CMD curl --fail http://localhost:5000/api/systeminfo || exit 1
CMD ["./start.sh"]

HEALTHCHECK does nothing more than tell Docker whether that container is working correctly, and it does so by running a command that checks the service - in my case it makes a request to the API, and only if it answers positively is the container attached to Docker's load balancer. This is also handy to verify that our API doesn't stop working for reasons of its own and, in that case, Docker can be notified of the problem. Perfect... Let's try it all out, and here is the output after adding more instances of the same web app:

curl localhost:5000/api/systeminfo
[{"guid":"55d63afe-0b67-47d4-a1d2-4fb0b9b83bef","dateTime":"2016-11-26T21:14:20.411148+00:00"}]
osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo
curl: (6) Couldn't resolve host 'localhost'
osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo
curl: (6) Couldn't resolve host 'localhost'

But what's going on? Unfortunately, in the current version, 1.12, this check does not work correctly in a swarm cluster. From my own tests, it seems that the healthcheck sends the request through the whole cluster, which obviously answers positively... Unfortunately there is no workaround, but luckily this bug has been fixed in version 1.13 (now in RC, and expected to be released by mid-December). Indeed, once version 1.13 is installed the problem disappears - I verified it myself. Speaking of this latest version, the ability to automatically roll back an image has also been released, but I haven't yet been able to check in person whether it works. Moreover - and it was about time - an experimental command has been added to view logs from inside the swarm.

Time for a few personal conclusions: with version 1.13 everything is so easy that it makes any other choice for distributing your services pale in comparison; Google's Kubernetes looks far too complicated by comparison, and even the solution I had shown with Consul seems much more cumbersome. Moreover, the future seems to be in containers (even Microsoft is building this into Azure to make it as easy to use as possible). Needless to say, the procedure I described above works perfectly both for a small web farm and if you decide to move to the cloud.

Interesting... and then some...


Continue reading Asp.net Core, Docker e Docker Swarm.


(C) 2017 ASPItalia.com Network - All rights reserved


          Seneca.js, message broker e infine Consul   

Foreword: what follows is based on my direct experience and, often, on my personal points of view, so don't shoot the pianist.

After this necessary foreword, given the topics covered, this post closes the trilogy dedicated to message brokers, started with the first post on the subject, expanded in the second post and now ending with this one, before moving on toward other destinations that the microservice architecture forces(!?!?) us to reach.

A step back: in those posts I used a message broker such as RabbitMQ to handle messages. If you remember what I wrote, the most complicated part of using them was the mandatory asynchronous handling of everything: sending the message and then the event signalling that the reply had arrived. For Microsoft's .NET Framework world I had built myself a library that made this job easier, while thanks to the inherently asynchronous nature of node.js everything was simpler there - see the second post for detailed explanations.
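
Just to show why that request/reply handling is awkward (my library is not reproduced here), this is a hedged, minimal sketch of the classic RPC pattern over RabbitMQ, written with the same client API (and the same usings) as the samples later in this post; the queue name "rpc_queue" and the payload are made up:

// Hypothetical sketch of request/reply over RabbitMQ (not the library mentioned above).
var factory = new ConnectionFactory { HostName = "localhost" };
using (var connection = factory.CreateConnection())
{
    var channel = connection.CreateModel();

    // Private, auto-delete queue where the reply will be delivered.
    string replyQueue = channel.QueueDeclare("", false, true, true, null);
    var consumer = new QueueingBasicConsumer(channel);
    channel.BasicConsume(replyQueue, true, consumer);

    // The correlation id lets us match the reply to this specific request.
    var props = channel.CreateBasicProperties();
    props.CorrelationId = Guid.NewGuid().ToString();
    props.ReplyTo = replyQueue;

    channel.BasicPublish("", "rpc_queue", props, Encoding.UTF8.GetBytes("my request"));

    // Block until the reply with the matching correlation id arrives.
    while (true)
    {
        var delivery = (RabbitMQ.Client.Events.BasicDeliverEventArgs)consumer.Queue.Dequeue();
        if (delivery.BasicProperties.CorrelationId == props.CorrelationId)
        {
            Console.WriteLine(Encoding.UTF8.GetString(delivery.Body));
            break;
        }
    }
}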

Since I have been spending a lot of time on node.js lately, I had the pleasure of trying/using the seneca.js framework. Briefly, with its default configuration this framework lets you use RPC - remote procedure calls - in an almost trivial way between processes on the same machine (we'll see shortly how to go beyond this limit).

Taking the basic examples, in a directory created for the purpose, on a machine where both node.js and npm are installed and working, you install seneca.js with a single command:

npm install seneca

Then we create the script containing the function, or functions, to be called:

var seneca = require('seneca')()

seneca.add({role: 'math', cmd: 'sum'}, function (msg, respond) {
  var sum = msg.left + msg.right
  console.log(">> " + sum);
  respond(null, {answer: sum})
})

seneca.listen();

The first line instantiates the framework, while the second defines the method that can be called remotely. We can add as many methods as we want, defining them as a pattern like the one above. For example:

{role: 'math', cmd: 'sum'}
{role: 'math', cmd: 'avg'}
{role: 'string', cmd: 'len'}

The use of role and cmd is just a convention from this framework's documentation; nothing stops us from writing:

{class: 'math', method: 'sum'}
{class: 'math', method: 'avg'}
{namespace:'system', class: 'string', cmd: 'len'}

Finally, to call the method defined in the example above, in another file we add this code:

var seneca = require('seneca')()
seneca.client();

seneca.act({role: 'math', cmd: 'sum', left: 1, right: 2}, function (err, result) {
  if (err) return console.error(err)
  console.log(result)
})

add is used to register the methods available in seneca.js, act to call those methods. When this code runs, the mechanism is very simple: seneca.js looks up the method we want to call, if it is available it sends the request to the process started earlier and, once the reply is received, it is passed as a parameter to our callback - in the example the previously defined command that sums two numbers is called.

You may notice something odd: in the method definition we used:

{role: 'math', cmd: 'sum'}

While to call it:

{role: 'math', cmd: 'sum', left: 1, right: 2}

Even though it is different, how did seneca.js recognize it? This framework uses patrun (a pattern-matching library) whose author is the same as seneca.js's. Patrun is able to recognize not only identical patterns but also similar ones, and can evaluate which of them is the closest match. Going back to the definition examples used earlier:

{class: 'math', method: 'sum'}
{class: 'math', method: 'avg'}

The pattern:

{role: 'math', cmd: 'sum', left: 1, right: 2}

It shares two values out of four with the first pattern ('math' and 'sum') and only one with the second ('math'), so seneca.js will use the first method.

As mentioned, this framework allows communication between processes on the same machine, but we can overcome this limit by adding a few lines of configuration:

var seneca = require('seneca')()

seneca.add({role: 'math', cmd: 'sum'}, function (msg, respond) {
  var sum = msg.left + msg.right
  console.log(">> " + sum);
  respond(null, {answer: sum})
})

seneca.listen({ type: 'http', port: '8000', host: '192.168.0.4', protocol: 'http' });

In listen I added the type of connection that seneca.js will accept, also specifying the port and the IP of the machine (192.168.0.4 is a VM on my test machine). The client, accordingly, gets a similar change:

var seneca = require('seneca')()
seneca.client({ type: 'http', port: '8000', host: '192.168.0.4', protocol: 'http' });

seneca.act({role: 'math', cmd: 'sum', left: 1, right: 2}, function (err, result) {
  if (err) return console.error(err)
  console.log(result)
})

This time it is client that defines where to look for the method and, once everything is started, the reply comes from the second machine.

So far, all simple and useful. But seneca.js has another very handy feature: it can interface with RabbitMQ. Just install a plugin with npm:

npm install seneca-amqp-transport

And here is the code that provides the sum function:

require('seneca')()
  .use('seneca-amqp-transport')
  .add({role: 'math', cmd: 'sum'}, function (msg, respond) {
    var sum = msg.left + msg.right
    console.log(">> " + sum);
    respond(null, {answer: sum})
  })
  .listen({
    type: 'amqp',
    pin: 'role:math',
    url: 'amqp://link_rabbitmq_service',
    "exchange": {
      "type": "topic",
      "name": "seneca.topic",
      "options": { "durable": true, "autoDelete": true }
    },
    "queues": {
      "action": {
        "prefix": "seneca",
        "separator": ".",
        "options": { "durable": true, "autoDelete": true }
      },
      "response": {
        "prefix": "seneca.res",
        "separator": ".",
        "options": { "autoDelete": true, "exclusive": true }
      }
    }
  });

And the client version:

var client = require('seneca')()
  .use('seneca-amqp-transport')
  .client({
    type: 'amqp',
    pin: 'role:math',
    url: 'amqp://link_rabbitmq_service',
    "exchange": {
      "type": "topic",
      "name": "seneca.topic",
      "options": { "durable": true, "autoDelete": true }
    }
  });

client.act({role: 'math', cmd: 'sum', left: 1, right: 2}, function (err, result) {
  if (err) return console.error(err)
  console.log(result)
});

The server version has many parameters to configure the connection to RabbitMQ and the names the queue and the exchange will have. You also have to pay attention to the pin parameter, which must match the one defined inside seneca.js's add. The nice part of this approach is that we can connect any number of server and client processes, and the requests and the whole workflow will be handled and distributed by RabbitMQ without any effort.

But... is this the right direction?

This is the question I asked myself after developing a few of my own processes/services. Out of curiosity, and because I find it a very interesting architecture, I have been turning my attention to microservices for quite a while. Doing my own experiments and reading material around, I found myself facing the dilemma of which approach to use to move information between the various services. I had found the definitive cure in message brokers such as RabbitMQ, but - again - is that the right road? If you follow the microservice architecture to the letter... no. For an application based on microservices to be built in a truly optimal way, NOTHING should be centralized. The tens/hundreds of microservices doing their work should not depend on any centralized service: any process can be stopped and restarted without affecting the other processes. And in RabbitMQ's case? By using it we are centralizing the messaging system and, by the architectural rules of microservices, that is (relatively) wrong - by centralizing the messaging system we make the architecture sensitive to its failures (leaving aside the option of building a cluster of servers dedicated solely to RabbitMQ). Moreover, and I noticed this from direct experience, the finer the granularity of the microservices, the more messages flow and the higher the load on the message broker. If you think about it, that is no small matter: as long as you have services with broader responsibilities (let me use this term) the exchange of messages between services stays limited, but in a purely microservice architecture the number of exchanged messages skyrockets. And just think of all the steps involved when service A asks service B for a simple piece of data:

  • 1) Service A sends the request message to the message broker (first network hop).
  • 2) The message broker receives the request, then sends service A an acknowledgement that the request was received (second network hop).
  • 3) The message broker puts the request into the dedicated queue and checks whether a remote process is able to handle it. As soon as one is found, it sends the request to service B (third network hop).
  • 4) Service B receives the request and tells the message broker that it has been received (fourth network hop).
  • 5) Service B has the reply ready and contacts the message broker, sending the reply to the appropriate queue (fifth network hop).
  • 6) The message broker, having received the reply, confirms to service B that it has arrived (sixth network hop).
  • 7) The message broker delivers the reply to service A through another queue (seventh network hop).
  • 8) Service A finally has its reply and tells the message broker that everything is OK (eighth network hop).

Eight data transmissions over the network. And that is for a single request. What if there were dozens for every basic operation of our application? Suppose we want to render a web page of an e-commerce site; we could split all the operations needed to generate the page into this sequence of requests to individual services:

  • 1) Request the info of the currently authenticated user (users microservice).
  • 2) Request the list of products in the cart (user_products microservice).
  • 3) Request the list of products the user searched for (products microservice).
  • 4) Request the stock availability of each product (warehouse microservice).
  • 5) Request the product ratings (rating microservice).
  • 6) Request the number of comments (comments microservice).
  • 7) Request the dedicated banners (banner microservice).
  • 8) Request the news to show in a div at the top of the page (news microservice).
  • ...

Leaving aside the further splitting each of these services might have, with this minimal set of requests we end up with at least 64 network transmissions between our systems and the message broker. Perhaps a bit expensive... is there another way? Yes, and it is even trivial: RESTful APIs. First of all they are simple to create both in the node.js world and in the asp.net world. They also let us receive data in the format we find most convenient (JSON or XML) and, to be called and return a result, they don't need the whole round trip seen before: service A calls service B through a REST API, end of story. Simple. But... how do we make all this scalable? We install our microservice, listening for HTTP requests, on n servers; how do we make these services scalable? The first answer that comes to mind, and the one suggested by a friend, is to use a load balancer (nginx will do) acting as a gateway for our service and able to balance the requests. Perfect, problem solved! Are we sure? To recap: we ruled out a message broker because it centralized message transmission, and now we want to solve the problem with another service that centralizes data transmission (SM stands for microservices and LB for load balancer)?
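
To make the contrast with the eight-hop flow concrete, here is a minimal, hedged sketch (the host name "products-service" and the path are made up) of the direct call from service A to service B through a REST API in C#:

// Hypothetical service A code: one HTTP round trip to service B, no broker involved.
using System;
using System.Net.Http;

class ServiceAClient
{
    private static readonly HttpClient http = new HttpClient();

    static void Main()
    {
        // "products-service" is a placeholder host name for service B.
        string json = http.GetStringAsync("http://products-service:5000/api/products")
                          .GetAwaiter().GetResult();
        Console.WriteLine(json); // one request, one response, done
    }
}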

Even with a cluster of load balancers the situation would not change. So? A possible solution comes from using service discovery. Its purpose is simple: to collect, monitor and expose a single access point for the services we need. We can register our REST APIs in it, and anyone can ask for the URL to reach them. Moreover, a service discovery tool also makes it possible to check (automatically or with dedicated scripts) whether there are problems and, if so, to block access to the API that is broken or unreachable. In my experiments I took Consul as a reference. This service discovery tool provides what was said above and much more (including a web interface where you can check the status of all our services). In its simplest use, we can talk to Consul via REST API to ask for the available services and so on.

After these few lines you might have the same doubt as before: isn't this too a tool that centralizes everything, given that to know where the REST APIs are we have to ask it where they are? Even worse: if I use a service, do I have to ask the service discovery which URL to call every single time? A round trip almost worse than the one we had with the message broker. But this is exactly where it gets interesting: a service discovery tool like Consul uses a smarter and very interesting strategy. Simplifying, this tool hooks into the machine's DNS so that when we call a service, the name resolution also queries Consul, which points us to the nearest active service and, if more than one is available, balances the requests among all of them. The approach is exactly this: on every machine where we want to take advantage of this service we must run the Consul client, which talks to one or more dedicated servers that exchange information about which services are reachable and which are not. The Consul service is indeed centralized, but the real work is done by the service discovery agents on the individual machines. Here is how our services will be exposed and visible:
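
The DNS integration usually makes this unnecessary, but just to show the REST side mentioned above, this is a hedged sketch in C# that asks a local Consul agent (assumed to be listening on the default HTTP port 8500) which nodes provide the "web" service registered later in this post:

// Sketch: query Consul's catalog for the instances of the "web" service.
using System;
using System.Net.Http;

class ConsulCatalogExample
{
    static void Main()
    {
        using (var http = new HttpClient())
        {
            string json = http.GetStringAsync("http://127.0.0.1:8500/v1/catalog/service/web")
                              .GetAwaiter().GetResult();
            Console.WriteLine(json); // nodes, addresses and ports for the "web" service
        }
    }
}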

Below, also as a personal note, I will show an example of configuring, across several machines, a few REST APIs written in node.js (whose content is of no interest: they return a trivial message and the current IP of the machine the process is running on) together with Consul. I will use Linux virtual machines; only at the end will I also cover Windows, because it deserves a separate explanation. First of all, in my case I have 3 VMs with these IPs:

192.168.0.4 192.168.0.5 192.168.0.6

On 192.168.0.4 I will install the main Consul server plus a node.js service; on 192.168.0.5 I will install Consul as a client, and it too will have a node.js service identical to the previous one (to simulate load balancing); on the last machine I will install only Consul, so it can call the service exposed by the other two machines. For technical details see the official site https://www.consul.io/.

192.168.0.4: let's start by creating a configuration file containing the definition of my node.js service:

{ "services": [ { "id":"web1", "name": "web", "tags": ["xxx"], "address": "192.168.0.4", "port": 1337, "enableTagOverride": false } ] }

The id is unique because it identifies the specific service instance (we can put as many services as we want in the list), while name defines the service name: if it is also used on other machines, it is what groups the similar services we want to put behind load balancing. The tag is useless in this case; address specifies the machine's IP (the current one here) reachable from the other machines (I will come back to this shortly). Port is the port our service answers on, and the last parameter works together with tags and, as written above, we don't need it for now.

Done; now we save this file as /etc/consul.d/web.json (we can put it wherever we want, but the documentation suggests this path, also because it is the directory where services are usually configured). Then, having downloaded the executable, we can start it from the terminal like this:

consul agent -server -bootstrap-expect 1 -data-dir /tmp/consul -node=agent-one -bind=192.168.0.4 -config-dir /etc/consul.d

As parameters we specify that we want to run it in server mode, then the directory where it will store its data, the node name (agent) and the services configuration file defined earlier. Once started, the terminal shows a long sequence of operations and, if there are no errors, it just stays there.

As written earlier, Consul hooks directly into the machine's DNS. In the case of my VM that is dnsmasq (Bind can be used as well). We also add the file 10-consul (as per the documentation) in /etc/dnsmasq.d with this content:

# Enable forward lookup of the 'consul' domain:
server=/consul/127.0.0.1#8600

This tells the machine's DNS to forward requests for that domain to Consul as well. OK, we're done here. Let's move on to the second machine, at IP 192.168.0.5. The DNS must be configured here too, as we have just seen. We create the web.json file as before but with this content:

{ "services": [ { "id":"web2", "name": "web", "tags": ["xxx"], "address": "192.168.0.5", "port": 1337, "enableTagOverride": false } ] }

Note the different IP and id. We start Consul with this command:

consul agent -data-dir /tmp/consul -node=agent-two -bind=192.168.0.5 -config-dir /etc/consul.d

The terminal will display various messages, but the last line will be an error because it cannot connect to the main Consul service. We somehow have to join this machine, and we do it from the 192.168.0.4 machine with this command:

consul join 192.168.0.5

Immediately, the second machine will display a message confirming the connection.

Moving to the last machine, 192.168.0.6, we carry out all the steps as on the previous machine, except writing the web.json file, because this machine will not expose any service but will only consume them. The command to start Consul is like the previous one but without the configuration directory:

consul agent -data-dir /tmp/consul -node=agent-three -bind=192.168.0.6

From the 192.168.0.4 machine:

consul join 192.168.0.6

And still from this machine, from the terminal, we type consul members to get this result:

OK, we're done, but what name will our service answer to? By default Consul puts everything under the .service.consul domain (it can be customized). Having set the name "web" in the web.json file, from the third machine we can call this URL, from a browser or from the terminal with curl:

http://web.service.consul:1337

Now, from the third machine, 192.168.0.6, we can check the state of the services with a single call:

curl web.service.consul:1337

Here is an example of the output:

If we call this URL from a machine that hosts the same service, the local instance will always be used, and only if it is stopped will Consul send the request to an external machine.

In the service definition in web.json I wrote that you must specify the machine's network IP because, had I defined it like this:

{ "services": [ { "id":"web1", "name": "web", "tags": ["xxx"], "address": "127.0.0.1", "port": 1337, "enableTagOverride": false } ] }

An external machine, when resolving the name "web.service.consul", would have been sent to 127.0.0.1, with easily imaginable consequences (I wanted to point this out because it was an oversight I made at the beginning).

Now let's move to the Windows world. To join my Windows machine to Consul with one of my REST APIs, I wrote:

consul agent -data-dir ./tmp/consul -node=agent-windows -bind=192.168.0.2 -config-dir ./services

Consul works on this operating system too, and in fact I had created a REST API with asp.net core exposed through Kestrel, which Consul found and called without problems (calling the URL http://web.service.consul, the API on Windows was load balanced as well); the problem is how to set up the forward lookup in Windows DNS so that it too can resolve the names of our services in Consul. The issue is that you have to install a DNS service on the machine and then configure it, and here I ran into the limits of my configuration skills, and time was what it was - it's summer for me too!

How much have I written? Too much, I know. Conclusions? I'll say what I think, for what it's worth - this is my blog where I can write whatever I like, right? First of all, if you embrace the microservices world you can adopt any infrastructure you want: do we want a message broker? Do we want service discovery? I have never denied that I have always liked the message broker world (with RabbitMQ, for example), and I wouldn't have dedicated two posts on this blog to it otherwise. For some time now, after the initial doubts, I have been exploring the service discovery world, and after the first hesitations it is winning me over more and more. Since I started writing code this way, I have found myself much more at ease writing REST APIs than calls to a message broker. The code is simpler and easier to test (you can quickly test a service even from a browser). Studying other new things like asp.net core at the same time, I finally found Kestrel, which seems made on purpose to open asp.net to the microservice world (it is trivial to create a REST API and launch a Kestrel instance that waits for requests, without bothering IIS, mini IIS or similar web servers). Creating a new project in Visual Studio from the Web API template, you just have to change the following code in program.cs:

public static void Main(string[] args)
{
    var host = new WebHostBuilder()
        .UseKestrel()
        .UseContentRoot(Directory.GetCurrentDirectory())
        .UseUrls("http://*:1337")
        .UseStartup<Startup>()
        .Build();

    host.Run();
}

And there's the service, nicely available for our microservice-based application. Can I say it? I like it.


Continue reading Seneca.js, message broker e infine Consul.


(C) 2017 ASPItalia.com Network - All rights reserved


          Divide et impera con c# e un message broker   

I want to share this. Some time ago we were discussing the simple divide et impera (divide and conquer) paradigm and how it is applied in practice. At its core there is nothing difficult: given a task X, to solve it you split it into smaller tasks, and those into smaller ones still, and so on, recursively. A real example that uses this paradigm is the famous quicksort which, having picked a middle value called the pivot, on the first pass moves the elements of the array to be sorted to the left if they are smaller than the pivot, to the right if they are larger. Then these two sub-arrays are split again, the first with its own pivot, the second with another; the partitioning starts over, moving elements to one side or the other depending on the pivot value: at the end of this pass there are four arrays. If the sorting is not complete, these four arrays are split again into eight smaller ones, each with its own pivot and the corresponding moves to one side or the other... and so on until the array is sorted (more info on the Wikipedia page, where the algorithm is also shown graphically). Taking the example code from the Wikipedia page, I can build the C# version:

static List<int> QuickSortSync(List<int> toOrder)
{
    if (toOrder.Count <= 1)
    {
        return toOrder;
    }

    int pivot_index = toOrder.Count / 2;
    int pivot_value = toOrder[pivot_index];

    var less = new List<int>();
    var greater = new List<int>();

    for (int i = 0; i < toOrder.Count; i++)
    {
        if (i == pivot_index)
        {
            continue;
        }

        if (toOrder[i] < pivot_value)
        {
            less.Add(toOrder[i]);
        }
        else
        {
            greater.Add(toOrder[i]);
        }
    }

    var lessOrdered = QuickSortSync(less);
    var greaterOrdered = QuickSortSync(greater);

    var result = new List<int>();
    result.AddRange(lessOrdered);
    result.Add(pivot_value);
    result.AddRange(greaterOrdered);
    return result;
}

Even though it is not optimized, it doesn't matter for the final purpose of this post: it does its dirty work and that's enough. When run, you can see the array of integers before and after sorting:

To improve this version we could use asynchronous calls and multiple threads: after all, the very first split described above, which returns two arrays, could already be processed by two separate threads, each working on its own sub-array. And at the next split we could use more threads. With a multi-core processor available, we would immediately get significant performance gains compared with the single-threaded version shown above. I have already talked at length about the multithreaded approach in this other post of mine, and you can find much more information on this portal. Of course, the supposed cure for every ill is always being able to use all the cores of your machine and a multitude of parallel threads. But how far can you push past these limits? Threads are not infinite, and neither are the cores of a CPU. Some novices - let me use that term - often think that parallel processing is the panacea for every problem. I have many operations to run in parallel, how can I solve this in code? Easy, a flood of parallel threads - and before .NET Framework 4 with its Tasks, and async/await in Framework 4.5 - that seemed one of the easiest techniques to use and perhaps also one of the most abused. On a first reading, the newcomer often gets the following question wrong:

Assume a single-core CPU (to simplify), and that to execute N operations our program takes exactly 40 seconds. If I modified this program to use 4 parallel threads, how long would it now take to run the whole workload?

If you answer hastily, you might say 10 seconds. If you know how a CPU and its cores work and didn't let the question fool you, you will have answered correctly: ~40 seconds! The computing power of a CPU is what it is, and splitting the work across more threads does not work miracles. Only with 4 cores would the processing finish in 10 seconds.
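
A quick way to convince yourself is a sketch like the following (timings obviously depend on the machine; this is only an illustration): the same CPU-bound workload is run sequentially and then split across 4 tasks, and the speedup you observe is bounded by the number of available cores, not by the number of threads you create.

using System;
using System.Diagnostics;
using System.Linq;
using System.Threading.Tasks;

class CpuBoundDemo
{
    // A purely CPU-bound piece of work.
    static double Burn(int iterations)
    {
        double acc = 0;
        for (int i = 1; i <= iterations; i++) acc += Math.Sqrt(i);
        return acc;
    }

    static void Main()
    {
        const int total = 200000000;

        var sw = Stopwatch.StartNew();
        Burn(total);
        Console.WriteLine("Sequential: " + sw.Elapsed);

        sw.Restart();
        // The same total amount of work, split across 4 tasks.
        var tasks = Enumerable.Range(0, 4).Select(n => Task.Run(() => Burn(total / 4))).ToArray();
        Task.WaitAll(tasks);
        Console.WriteLine("4 tasks:    " + sw.Elapsed);
        // On a single core the two timings are roughly the same;
        // with 4 free cores the second is roughly a quarter of the first.
    }
}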

But why this digression? Because if we wanted to push the divide et impera paradigm even further for the little sorting program above, so that it could, in theory, go beyond the limits of the machine (CPU and memory), which road could we - I repeat - could we take? The solution is easy to guess: we add another machine to share the work; still not enough? We can add as many machines and as much power as the processing requires.

To solve these kinds of problems, and to then be able to extend a project almost without limits, splitting our process into microservices is one of the most popular solutions, as is knowing the famous scale cube by Martin L. Abbott and Michael T. Fisher.

Setting aside the little sorting program and extending the discussion to applications of a certain weight, we can define a small web application acting as a blog: showing the list of posts, the single post detail, perhaps a search, and the use of tags. At its core we will have a database to store the posts, then a web application with the classic 3 layers for presentation, business logic and data access. This kind of application, in the cube, would sit at the bottom-left corner. It is a monolithic application, where all the functions are enclosed within a single process. If we wanted to move along the X axis, we would have to duplicate this web application across multiple processes and servers. The advantages would be immediately evident: if this application were successful and the hardware of the machine it runs on were no longer sufficient, installing it on more servers would absorb the increased workload (leaving aside beefing up the database). The Y axis of the cube is the most interesting one: with it we move the various functions of the web application into small independent modules. Still using the blog as an example, we could keep the presentation layer on the web server but split the business logic layer into several independent modules; the first layer would then query one module or another to ask for the list of posts and, as soon as the reply arrives, return the requested data to the client. This point alone shows a remarkable advantage: in an IT world increasingly devoted to event-driven and asynchronous programming - just look at the remarkable success of Node.js, toward which almost everyone is moving in a more or less simplified way, and async/await is by now part of the everyday life of the .NET programmer - where nothing must be wasted and no thread must sit idle waiting, this approach allows optimal use of the servers. I'm in a quiz mood: what is "wrong" (note the quotes) with the following code (in C#)?

var posts = BizEntity.GetPostList();
var tags = BizEntity.GetTagList();

Come on, it's two lines of code, what could possibly be wrong with them? Hypothetically the first fetches the list of a blog's posts from the database, while the second fetches the list of tags (this code could be useful for the web application seen earlier). The first list is used to display the long list of posts of our blog, the second to show the tags in use. If by now you have shifted your mindset to writing your code asynchronously, or if you use node.js, you will already have understood what is wrong with these two lines of code: they simply execute two requests sequentially! The thread reaches the first line and stays there, blocked, waiting for the database's reply; once the reply arrives, it executes the second request and waits again. Instead, why not launch both requests in parallel and free the thread while waiting for the reply? In C#:

var taskPost = BizEntity.GetPostListAsync();
var taskTag = BizEntity.GetTagListAsync();

Task.WaitAll(new Task[] { taskPost, taskTag });

var posts = taskPost.Result;
var tags = taskTag.Result;

Great, this is what we wanted: parallel execution and threads free to process other requests.
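
One hedged side note: Task.WaitAll still blocks the calling thread until both tasks complete. Inside an async method (for example an async ASP.NET action), to really release the thread you would await the tasks instead, along these lines (GetPostListAsync and GetTagListAsync are the same hypothetical methods as above):

// Fully asynchronous variant: the calling thread is released while waiting.
var taskPost = BizEntity.GetPostListAsync();
var taskTag = BizEntity.GetTagListAsync();

await Task.WhenAll(taskPost, taskTag);

var posts = taskPost.Result;
var tags = taskTag.Result;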

Back to the blog example: suppose at this point we want to add the ability to comment on the various posts of our blog. With the monolithic application described at the beginning we would have to touch the code of the entire project, whereas with the split into smaller independent modules, that is microservices, we would write an independent module to install on one or more servers (remember the X axis) and then hook up the other modules that need this data. Finally, the Z axis introduces yet another differentiation: we can partition data and functions so that requests can be split, for example, by the year or month the post was published, or by whether they belong to certain categories, and so on... You don't think that all the pages Google has archived and searches over sit on a single replicated server, do you?

Having explained the famous scale cube in theory (within my limits), all that remains is to answer the last question: what is the glue between all these microservices? The .NET Framework provides good technology to allow communication between processes, whether they are on the same machine, on a battery of servers in a farm, or remote and communicating over the internet. Using WCF you can easily move from standard WSDL web services to faster communication over TCP and so on. This approach puts us in front of an obvious limit, since these communications are direct: suppose we have a machine hosting the blog's presentation layer; to request the posts from the microservice running on a second server, it first of all has to know WHERE it is (IP address) and HOW to talk to it. Having solved this problem in a simple way (storing the IP of the second machine in web.config, for example, and using a shared interface for the communication), we immediately face another problem: how can we move along the X axis of the cube, adding other machines with the same microservice so that requests are balanced automatically? We would have to keep the caller continuously up to date on the number of machines offering the service it needs, with notifications of planned or unplanned anomalies: server maintenance taking the service offline, or a sudden failure of the machine. For the example above, the presentation layer would also have to contain the logic to manage all of this... and so would every microservice of our application... Absolutely too complicated and unmanageable. So why not delegate this task to an external component such as a message broker?

Azure offers its Microsoft Azure Service Bus, very efficient and ideal when using Microsoft's cloud; in my case my preference goes to RabbitMQ, also because it is the only one I have worked with in depth. First of all RabbitMQ is a complete open source message broker that supports every kind of protocol (from AMQP https://en.wikipedia.org/wiki/Advanced_Message_Queuing_Protocol to STOMP https://en.wikipedia.org/wiki/Streaming_Text_Oriented_Messaging_Protocol) and, above all, has clients for almost every technology, from the .NET Framework through Java, node.js and so on. It can also be installed as a server on the main operating systems: Windows, Linux and Apple. If you don't want to install anything on your development machine for testing, you can rely on some free (and limited) services available on the internet. Currently CloudAMQP (https://www.cloudamqp.com/) offers, among its plans, a free tier with a limit of 1,000,000 messages per month:

For very high-end needs there are also clustered plans handling hundreds of thousands of messages per second, but for simple tests the free tier is more than enough. Once registered you get a complete RabbitMQ service with all the access parameters, available both via REST API and from a classic web page:

(The username and password are not real in this case.)

Clicking on the orange "RabbitMQ management interface" button gives you the full control panel for any configuration, such as creating queues and exchanges (optional, because all of this can also be done from code):

If you don't want to use a public service, you can download the version suitable for your operating system directly from the RabbitMQ site:

https://www.rabbitmq.com/

With the Windows version, every time I have had to install it I have run into a problem starting the service. To check that everything works, just go to the Start menu and, after selecting the "RabbitMQ Command Prompt" entry, type the command:

rabbitmqctl.bat status

If the response is a long JSON it means everything is fine, otherwise you will see errors about the RabbitMQ node failing to start and the like. In these cases the first step is to check the content of the cookies created by Erlang (to be installed together with RabbitMQ); the first is at the path:

%HOMEDRIVE%%HOMEPATH%\.erlang

The second:

C:\Windows\.erlang.cookie

If they are identical and the problem persists, from a terminal opened as administrator, run these three commands in sequence:

rabbitmq-service remove
rabbitmq-service install
net start rabbitmq

If even this doesn't work, all that's left is to turn to Saint Google. If we want the web interface to be available in the locally installed version, we have to use these commands:

rabbitmq-plugins enable rabbitmq_management
rabbitmqctl stop
rabbitmq-service start

Now you can open a browser and reach the web interface at:

http://localhost:15672/

Username: guest, password: guest.

Now, whether you used a free service on the internet or installed everything on your own machine, for a simple test you can go to the Queues tab and create a new queue, on which you can publish and read messages directly from this interface. OK, but what are queues and exchanges? In RabbitMQ (and in any other message broker) there are three main components:

  • The exchange, which receives messages and routes them to a queue; this component is optional.
  • The queue, which is the actual queue where messages are stored.
  • The binding, which ties an exchange to a queue.

As said before, once a queue is created you can publish messages to it and read them from code. That's all. Nothing complicated. A queue can have several properties, the main ones being (see the sketch right after this list):

  • Durability: we can have RabbitMQ persist messages to disk so that, if the machine restarts, a queue still waiting to be processed is not lost.
  • Auto delete: a queue can be configured to delete itself automatically as soon as all the connections attached to it are closed.
  • Exclusive (private): the queue accepts a single process as its consumer; anyone, however, can add items to it.
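
As a hedged sketch of how these properties map onto the .NET client used later in this post, they are simply the boolean parameters of QueueDeclare (the queue name "MyQueue" is made up for this example):

var connectionFactory = new ConnectionFactory { HostName = "localhost" };
using (var connection = connectionFactory.CreateConnection())
{
    var channel = connection.CreateModel();

    // durable: survive a broker restart; exclusive: only this connection may consume;
    // autoDelete: drop the queue when the last connection goes away.
    channel.QueueDeclare("MyQueue",  // queue name (made up for this example)
                         true,       // durable
                         false,      // exclusive
                         false,      // autoDelete
                         null);      // extra arguments
}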

As written above, from code we can connect directly to a queue, send messages, and have other processes fetch and possibly process them. The real strength of message brokers obviously does not stop here. Using an exchange lets us write rules for delivering messages to the queues attached through the corresponding bindings. These are the delivery modes available with an exchange (a short code sketch follows the list):

  • Direct: by setting an exact routing key, our message is delivered to that and only that queue bound to the exchange with that binding, as in the figure below:
  • Topic: wildcard characters can be used in the routing key definition so that a message is delivered to one or more queues matching that topic. Simple example: an exchange can be bound to several queues; if there were two, both with the routing key #.message, sending a message in topic mode with either of these routing keys, a1.message or qwerty.message, both queues would receive it.
  • Header exchange: instead of the routing key, the message's header fields are inspected.
  • Fanout: all the bound queues receive the message.
Another detail not to underestimate when using message brokers is the reliability of message delivery and reception. While a lost log entry may be of minor importance and tolerable (because of a machine restart or any external cause), losing a transaction for a booking or a payment causes serious problems. RabbitMQ supports message acknowledgement: in this mode RabbitMQ delivers the message to one of our processes, which handles it, but does not remove it from the queue until the process sends a command confirming it can be deleted. If, in the meantime, the process dies and the connection between it and the message broker drops, the message will be delivered to the next available process.

For a simple test from the interface, let's go to the "Queues" section and create a new queue named "TestQueue":

Cliccando su "Add queue" la nostra nuova coda apparirà nella lista della pagina. Si possono anche modificare la durata e le altre proprietà della queue prima citate, ma si può lasciare tutto così com'è e andare avanti. Creiamo ora una exchange dalla sezione "Exchanges" dal nome "ExchangeTest" e il type in "Direct":

And now let's connect the exchange and the queue created earlier. In the table on the same page you will notice that our exchange has appeared. Clicking on it we now have the option of defining a binding:

If everything is correct, we will see a new diagram showing the connection.

Now, on the same page, open the "Publish message" section and enter the routing key defined earlier plus some test text. Then click "Publish message":

If everything went well, a message on a green background will appear saying that the message was correctly delivered to the queue. To check, go to the "Queues" section and you will see that the queue now holds one message:

Going to the bottom of the page, under "Get Message", you can read and delete the message.

OK, all nice and simple... but what if I wanted to do it from code? The simplest communication mode is one-way. In this case one process sends a message to a queue and another process reads that message (in the attached solution these are the Test1A and Test1B projects). First of all you need to add a reference to the RabbitMQ.Client library, available on NuGet. So here is the code that waits for messages on the queue (the code automatically creates the Example1 queue and, in the solution linked at the end of this post, the project is named Example1A); first, the code that reads and drains the queue:

const string QueueName = "Example1";

static void Main(string[] args)
{
    var connectionFactory = new ConnectionFactory();
    connectionFactory.HostName = "localhost";

    using (var Connection = connectionFactory.CreateConnection())
    {
        var ModelCentralized = Connection.CreateModel();
        ModelCentralized.QueueDeclare(QueueName, false, true, false, null);

        QueueingBasicConsumer consumer = new QueueingBasicConsumer(ModelCentralized);
        string consumerTag = ModelCentralized.BasicConsume(QueueName, true, consumer);

        Console.WriteLine("Wait incoming message...");

        while (true)
        {
            var e = (RabbitMQ.Client.Events.BasicDeliverEventArgs)consumer.Queue.Dequeue();
            string content = Encoding.Default.GetString(e.Body);
            Console.WriteLine("> {0}", content);

            if (content == "10")
            {
                break;
            }
        }
    }
}

ConnectionFactory lets us create the connection to our RabbitMQ server (localhost in this example; the guest username and password are used automatically). Here, on ModelCentralized, the queue name is specified, and the next three boolean values specify whether it is durable (messages are saved to disk and recovered after a restart), exclusive (only the one who creates the queue can read its content) and autoDelete (the queue deletes itself when even the last connection to it is closed). Finally the consumer object, with the Dequeue function, blocks the process thread and waits for content from the queue; when the first message arrives, it takes its content (this .NET dll returns a byte array), converts it to a string and prints it to the screen.

The sending code (Example1B):

// Code identical to the previous example omitted,
// up to the creation of ModelCentralized:
var ModelCentralized = Connection.CreateModel();

Console.WriteLine("Send messages...");

IBasicProperties basicProperties = ModelCentralized.CreateBasicProperties();
byte[] msgRaw;

for (int i = 0; i < 11; i++)
{
    msgRaw = Encoding.Default.GetBytes(i.ToString());
    ModelCentralized.BasicPublish("", QueueName, basicProperties, msgRaw);
}
}

Console.Write("Enter to exit... ");
Console.ReadLine();
}

The rest of the code instantiates the objects needed to send the messages (without setting any particular properties); once our message is converted into a byte array, the BasicPublish function actually sends it to the QueueName queue (the first parameter, an empty string, is the name of the exchange to use, if any; in this case, sending the message directly to the queue, no exchange is needed). The code sends a sequence of numbers to the queue and, if after starting it you check the web application seen earlier, you will see that the "Example1" queue contains 11 messages.

The result:

Let's now introduce the use of an exchange, sending messages through a private queue. We also set up the queue so that we are the ones sending the acknowledgement message. The code gets only slightly more complicated for the console application waiting for messages (Example2A).

// Define the exchange name:
const string ExchangeName = "ExchangeExample2";

// The queue name is no longer needed: it is created dynamically,
// with a random name, by RabbitMQ.
// The code is the same as before up to the creation of ModelCentralized:
var ModelCentralized = Connection.CreateModel();

string QueueName = ModelCentralized.QueueDeclare("", false, true, true, null);
ModelCentralized.ExchangeDeclare(ExchangeName, ExchangeType.Fanout);
ModelCentralized.QueueBind(QueueName, ExchangeName, "");

QueueingBasicConsumer consumer = new QueueingBasicConsumer(ModelCentralized);
string consumerTag = ModelCentralized.BasicConsume(QueueName, false, consumer);

// The rest of the code, waiting for messages and printing them, is the same as before

In the queue definition with QueueDeclare the name was left empty because RabbitMQ will assign us a random one. Its name is not important for receiving messages, because another process, in order to send us messages, will use the exchange name. ExchangeDeclare does exactly that: it creates an exchange if it doesn't already exist, and with QueueBind the queue is tied to the exchange. This exchange is also defined as Fanout: any queue bound to it will receive every message sent. There is one difference between this code and the previous one: now we are the ones who must tell RabbitMQ that we have received and processed the message, and we do it with this code:

string consumerTag = ModelCentralized.BasicConsume(QueueName, false, consumer);

Il secondo parametro, false, impostiamo il sistema perché siamo noi che vogliamo inviare il comandi di avvenuta ricezione che si completa con la riga successiva:

ModelCentralized.BasicAck(e.DeliveryTag, false);
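For completeness, the receive loop that the post omits for Example2A might look roughly like this; it is only a sketch that reuses the ModelCentralized and consumer objects declared above, with the BasicAck call shown above placed at the end of each iteration:

while (true)
{
    // Dequeue blocks until a message is delivered to our private queue
    var e = (RabbitMQ.Client.Events.BasicDeliverEventArgs)consumer.Queue.Dequeue();
    string content = Encoding.Default.GetString(e.Body);
    Console.WriteLine(content);
    // autoAck is disabled, so we confirm the message ourselves
    ModelCentralized.BasicAck(e.DeliveryTag, false);
}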

Sending messages does not change much compared with the previous example; only the publish call changes:

ModelCentralized.BasicPublish(ExchangeName, "", basicProperties, msgRaw);

Here we specify the exchange name rather than the queue name. If we start the two processes, the result is the same as before. But now we can start two instances of the first program and see that the messages are received by both:

We can start as many instances as we like: all of them will receive our messages.

Besides the Fanout mode seen above, an exchange can also be declared in Topic mode, where the binding between an exchange and one or more queues is made through routing keys containing wildcard characters. There are two wildcards: * and #. They do not allow as much freedom as you might imagine, though. A mistake newcomers often make is to think that the wildcards are to be used when sending messages. That is wrong: they must be used when binding queues to the exchange. A published message must always carry a valid (or empty) routing key. First of all, routing keys are defined as words separated by dots. For example:

altezza.coloreocchi.genere

If we define two routing keys binding an exchange to two queues like this:

basso.*.maschile
*.marroni.*

And we then send messages with these routing keys:

basso.azzurri.maschile
alto.marroni.femminile
alto.azzurri.femminile
basso.marroni.maschile
azzurri.maschile

The first will be delivered only to the first queue, the second only to the second queue, the third to neither of them, and the fourth to both. The last one, not being made up of three words, will be discarded.

Besides the asterisk we can use the hash character (#):

#.maschile

The difference is that the asterisk matches exactly one word, while the hash is a full wildcard that matches any word, or any number of words, in its place. The rule above would accept:

basso.azzurri.maschile
marroni.maschile
magro.alto.marroni.maschile
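Just to make the abstract keys above concrete (this snippet is not part of the downloadable solution: the exchange name is invented, and basicProperties/msgRaw are assumed to be built as in the earlier examples), the two bindings could be declared with the same client API like this:

// Hypothetical topic exchange for the altezza.coloreocchi.genere example
ModelCentralized.ExchangeDeclare("ExchangeTopicDemo", ExchangeType.Topic);

// Two server-named queues, each bound with its own wildcard pattern
string firstQueue = ModelCentralized.QueueDeclare("", false, true, true, null);
string secondQueue = ModelCentralized.QueueDeclare("", false, true, true, null);
ModelCentralized.QueueBind(firstQueue, "ExchangeTopicDemo", "basso.*.maschile");
ModelCentralized.QueueBind(secondQueue, "ExchangeTopicDemo", "*.marroni.*");

// A message is always published with a concrete routing key, never a wildcard;
// this one matches both bindings and is therefore delivered to both queues
ModelCentralized.BasicPublish("ExchangeTopicDemo", "basso.marroni.maschile", basicProperties, msgRaw);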

L'esempio "Example3A" avvia due thread con due routing key differenti. Il codice è uguale agli esempi precedenti tranne che per queste due righe:

ModelCentralized.ExchangeDeclare(_exchangeName, ExchangeType.Topic); ModelCentralized.QueueBind(QueueName, _exchangeName, _routingKey);

Nella prima specifichiamo che il tipo di exchange è Topic, nel secondo, oltre al nome della queue e al nome dell'exchange, inseriamo anche le seguenti routing key:

*.red small.*

Nell'invio il codice è uguale agli esempi precedenti tranne per questa riga:

ModelCentralized.BasicPublish(ExchangeName, "small.red", basicProperties, msgRaw); ModelCentralized.BasicPublish(ExchangeName, "big.red", basicProperties, msgRaw); ...

Ecco la schermata di ouput:

So far we have focused on sending messages in nearly all its flavours - the direct exchange, which we will see in the next example, and the headers mode, which I will not cover, are still missing - and now it is time to move in the opposite direction and pay more attention to how messages are read from the queue. With fanout and topic messages we have seen that we can send messages to several queues at once, each with a single process attached... but what if we attached several processes to a single queue? This is the most interesting point of using a message broker: when the queue receives messages, it distributes them among all the connected processes:

Here we can see the even distribution of all the messages among all the processes. Sending messages is nothing new compared with what we have seen so far: we use the exchange name (in direct mode) and a routing key (not mandatory); with the message in msgRaw, sending is simple:

ModelCentralized.BasicPublish(ExchangeName, RoutingKey, basicProperties, msgRaw);

There is a bit more novelty in the example that reads the queue (in the downloadable project it is Example4A). The queue, exchange and routing key names are defined:

const string ExchangeName = "ExchangeExample4";
const string RoutingKey = "RoutingExample4";
const string QueueName = "QueueExample4";

... and we connect in the usual way:

var ModelCentralized = Connection.CreateModel();
ModelCentralized.QueueDeclare(QueueName, false, false, true, null);
ModelCentralized.ExchangeDeclare(ExchangeName, ExchangeType.Direct);
ModelCentralized.QueueBind(QueueName, ExchangeName, RoutingKey);
ModelCentralized.BasicQos(0, 1, false);

In the queue declaration we now specify the name, not durable, not exclusive, but with autoDelete. The exchange is declared as Direct. The novelty is the call to BasicQos: here we specify that each process will read one and only one message at a time. Reading the messages works in the same way:

var e = (RabbitMQ.Client.Events.BasicDeliverEventArgs)consumer.Queue.Dequeue();

After this parade of RabbitMQ's possibilities, having seen how messages are sent and received, it is time to come back down to earth with real examples. One process that sends a message and another, completely independent one that picks it up from the queue and displays it is fine only in demos: at most it can be useful for sending log messages and little else. In the real world a process calls another process to request data. The request/reply pattern is what we need. To recreate it with RabbitMQ, based on the examples seen so far, we first need to create a public queue where the requests are sent. And now the problem: how can the service reply to the process that requested the data? The solution is simple: the process requesting the data must have its own queue where the replies will be deposited. So far we have not looked in detail at the messages received and sent by RabbitMQ. We can set several properties that are useful for processing the exchange later. Here is the sending code seen so far, with a few extra properties:

IBasicProperties basicProperties = ModelCentralized.CreateBasicProperties();
basicProperties.MessageId = ...
basicProperties.ReplyTo = ...;
msgRaw = Encoding.Default.GetBytes(...);
ModelCentralized.BasicPublish(ExchangeName, "", basicProperties, msgRaw);

MessageId and ReplyTo are two string properties that we can use freely. It is easy to guess that ReplyTo can be used to specify the queue of the requesting process. And MessageId? We can use it to specify which request we are replying to. In the examples "Example5A" and "Example5B" we put everything said so far into practice. "Example5A" is the process that will handle our data, in this case a trivial arithmetic addition. The most important part is the one that waits for the request and sends the reply:

var e = (RabbitMQ.Client.Events.BasicDeliverEventArgs)consumer.Queue.Dequeue();
IBasicProperties props = e.BasicProperties;
string replyQueue = props.ReplyTo;
string messageId = props.MessageId;
string content = Encoding.Default.GetString(e.Body);
Console.WriteLine("> {0}", content);
int result = GetSum(content);
Console.WriteLine("< {0}", result);
var msgRaw = Encoding.Default.GetBytes(result.ToString());
IBasicProperties basicProperties = ModelCentralized.CreateBasicProperties();
basicProperties.MessageId = messageId;
ModelCentralized.BasicPublish("", replyQueue, basicProperties, msgRaw);
ModelCentralized.BasicAck(e.DeliveryTag, false);

In this code we take the requester's queue name and the messageId that identifies the call. Using a new IBasicProperties object (we could have reused the same one, but this way its purpose is clearer), we set the MessageId property and send the reply to the queue name taken from the request.

Nothing too complicated so far. The trickier part is the process that will call this service, because it has to create a private, exclusive queue for itself and at the same time send the requests to the public queue. Since we cannot use a synchronous call (and it would be absurd), I will use two threads: one to send the requests and a second one for the replies. To keep track of the requests we will use a dictionary that stores the messageId and the request:

messageBuffers = new Dictionary<string, string>();
messageBuffers.Add("a1", "2+2");
messageBuffers.Add("a2", "3+3");
messageBuffers.Add("a3", "4+4");

Then the fictitious name of the private queue where the service will send its replies is defined:

QueueName = Guid.NewGuid().ToString();

Sending the requests looks like this (as already seen):

foreach (KeyValuePair<string, string> item in messageBuffers)
{
    IBasicProperties basicProperties = ModelCentralized.CreateBasicProperties();
    basicProperties.MessageId = item.Key; // a1, a2, a3
    basicProperties.ReplyTo = QueueName;
    msgRaw = Encoding.Default.GetBytes(item.Value); // 2+2, 3+3, 4+4
    ModelCentralized.BasicPublish(ExchangeName, "", basicProperties, msgRaw);
}

And now the thread for the replies:

while (true)
{
    var e = (RabbitMQ.Client.Events.BasicDeliverEventArgs)consumer.Queue.Dequeue();
    string content = Encoding.Default.GetString(e.Body);
    string messageId = e.BasicProperties.MessageId;
    Console.WriteLine("{0} = {1}", messageBuffers[messageId], content);
    ModelCentralized.BasicAck(e.DeliveryTag, false);
}

Very simple: once a reply delivered by RabbitMQ has been read, we read its MessageId and use it to look up the text of the request, so we can pair it with the correct reply (here only for display purposes).

In this case too we can start several processes waiting to be called. A process can be on the same machine, or it could be on the other side of the planet: the only rule for it to answer a request is that it is reachable and connected to RabbitMQ. At this point it is easy to see the potential of this approach: a message broker at the centre and one or more machines connected to it running dozens of micro services, each responsible for one or more functions. Nothing stops us from putting, alongside the addition function shown above, a service that returns the list of articles for an e-commerce site. We can create another micro service to manage users and their shopping cart. And the nice thing is that we can install them on the same machine or in a web farm with dozens of servers. Moreover, as needs grow, we can install the same micro service on several servers - remember the axes of the cube?

Let's be honest: this technique has incredible potential, but it has the classic appeal of a demo in front of customers: great as long as it stays simple. Raise the bar even a little and you will discover that just going from one queue to several queues in an application makes everything damn confusing and hard to manage. If you look at the Example5A code alone you will notice that, to keep the code as short as possible, I left everything in a single thread, which is not the optimal version and does not entirely help comprehension; for better performance, separate threads for requests and replies would be advisable, as would proper handling of multiple queues. My advice is to encapsulate all these functions, as I tried to do in the final example, which you can find in the solution under the name "QuickSortRabbitMQ". Quicksort? Where did we already talk about it? Of course, at the beginning of this post. That is where this whole discussion started, taking us from distributing processes all the way to using a message broker. Even if only for teaching purposes, imagine creating a micro service that sorts an array of integers. As we saw, quicksort splits the array into two sub-arrays around a pivot value. What if we fed those two sub-arrays back to the same micro service, and so on, until the array is sorted? The micro service in question will wait, thanks to RabbitMQ, for a request addressed to it or, better, to the queue from which it will pick up the array to sort. It will then have a second, private queue, where it will wait for the sorted sub-arrays that it itself sent to the main queue. Messy, right? Yes, this is how you call micro services through a message broker recursively, and recursion is exactly what we need for quicksort.

To explain with an example: given this array to sort, we send it to the main queue of our sorting micro service:

[4,8,2,6] -> Public queue

On the first pass, our micro service might split the array into two sub-arrays using pivot 5, which are in turn sent to the public queue:

[4, 2] -> Public queue
[8, 6] -> Public queue

Now the same micro service will be called twice; it will sort the only two numbers present in each sub-array and will have to return them... yes, but to whom? Simple: to itself... And how? We could use the public queue again, but that brings a far from trivial problem. If you recall the quicksort algorithm, the same method waits for the two arrays it sent out, this time sorted, and must then return the merged result to whoever called it. So we have to keep track of the two arrays that were sent out so they can be merged: and how could we do that if this micro service had several active instances and one array ended up in a process on machine A and the other in a process on machine B? The process that sends the request MUST BE the one that receives the replies we have just seen, and we can only achieve that by creating a private queue in that same process and, using the message properties, addressing the reply to the correct queue.

[2,4] -> Private queue of the calling process
[6,8] -> Private queue of the calling process

The calling process will also wait on its private queue for both replies, merge them, and then return the result in turn to its own caller, which could again be itself or another process.

You can already imagine the complexity of writing code that manages this mess across multiple threads. To simplify things I created a small library that, completely on its own, creates the threads it needs and, through events, delivers the messages coming from the message broker to the process. In the sample solution it is in the "RabbitMQHelperClass" project. This library is used by the "QuickSortRabbitMQ" project. We have reached our destination: here we find the console application that uses the message broker to pass around the "portions" of the array to be sorted with quicksort. The first part is simple: after creating an array of 100 elements populated with random integers from 1 to 100, the class that will make our work easier (or at least tries to) is instantiated.

using (rh = new RabbitHelper("localhost"))
{
    rh.AddPublicQueue(queueName, exchangeName, routingKey, false);
    var privateQueueThread = rh.AddPrivateQueue(); // "QueueRicorsiva");
    privateQueueName = privateQueueThread.QueueInternalName;

There is the function that creates the public queue (where the requests for arrays to sort will be sent). Then a private queue is created, which will be used to return the sorted array to the caller.

var privateQueueResultThread = rh.AddPrivateQueue(); // "QueueFinale");
privateQueueNameResult = privateQueueResultThread.QueueInternalName;

Here a further private queue is created: this is the one that will hold the final result at the end of the recursive cycle (unlike the public queue, which is unique, there can be more than one private queue). It is time to wire up the events:

string messageId = RabbitHelper.GetRandomMessageId();
rh.OnReceivedPublicMessage += Rh_OnReceivedPublicMessage;
privateQueueThread.OnReceivedPrivateMessage += Rh_OnReceivedPrivateMessage;
privateQueueResultThread.OnReceivedPrivateMessage += Rh_OnReceivedPrivateMessageResult;

messageId is a random, unique GUID. OnReceivedPublicMessage is the event that will be executed when a message arrives on the public queue created earlier. The same goes for the two private queues with OnReceivedPrivateMessage. It is time to start our sorting procedure:

var msgRaw = RabbitHelper.ObjectToByteArray<List<int>>(arrayToSort);
rh.SendMessageToPublicQueue(msgRaw, messageId, privateQueueNameResult);

As seen before, everything transmitted via RabbitMQ is serialised as a byte array. The ObjectToByteArray function does exactly that (we will come back to it later) and SendMessageToPublicQueue sends the array to sort to the public queue; the name of the private queue that will wait for the final reply of the completed sort is also sent. Ok, now the class that created the queue-processing threads for us will receive the message and pass its content, along with some extra information, to the event defined earlier, "OnReceivedPublicMessage". Here the sorting function seen at the beginning of this post has been rewritten:

private static void Rh_OnReceivedPublicMessage(object sender, RabbitMQEventArgs e)
{
    string messageId = e.MessageId;
    string queueFrom = e.QueueName;
    var toOrder = RabbitHelper.ByteArrayToObject<List<int>>(e.MessageContent);
    Console.Write(" " + toOrder.Count.ToString());
    if (toOrder.Count <= 1)
    {
        var msgRaw = RabbitHelper.ObjectToByteArray<List<int>>(toOrder);
        rh.SendMessageToPrivateQueue(msgRaw, messageId, queueFrom);
        return;
    }

Having taken the content of the array, plus the accompanying information (the unique message id and the queue we have to reply to), we check that its size is greater than one; otherwise the same array that was received is sent back as the reply to the private queue. The rest of the code splits the elements of the array into those smaller and those larger than the pivot value and sends these two arrays to the public queue:

var rs = new RequestSubmited
{
    MessageParentId = messageId,
    MessageId = RabbitHelper.GetRandomMessageId(),
    QueueFrom = queueFrom,
    PivotValue = pivot_value
};
lock (requestCache)
{
    requestCache.Add(rs);
}
var msgRaw1 = RabbitHelper.ObjectToByteArray<List<int>>(less);
rh.SendMessageToPublicQueue(msgRaw1, rs.MessageId, privateQueueName);
var msgRaw2 = RabbitHelper.ObjectToByteArray<List<int>>(greater);
rh.SendMessageToPublicQueue(msgRaw2, rs.MessageId, privateQueueName);

RequestSubmited is a class that contains only properties used to identify the reply that comes back through the message broker from another (or the same) process.

Only when the arrays are reduced to a single element is everything sent to the private queues, handled by Rh_OnReceivedPrivateMessage. This event has to be raised twice, once for each of the two array halves split by the pivot value. The first part of this function simply waits for both halves to arrive before merging them. The RequestSubmited object is used to retrieve the message id and the pivot value:

private static void Rh_OnReceivedPrivateMessage(object sender, RabbitMQEventArgs e)
{
    string messageId = e.MessageId;
    string queueFrom = e.QueueName;
    // ... code that collects both halves ...
    // finally the sorted array is sent to the private queue
    var msgRaw = RabbitHelper.ObjectToByteArray<List<int>>(result);
    rh.SendMessageToPrivateQueue(msgRaw, messageParentId, queueParent);
}

I will not dwell on the code that performs the sort (we have already seen it) or on the code that matches the two halves (it is a trivial check on a List<...> object); besides, the source code is easy to consult and you can test it yourself. Let's look at the end result, though:

The nice thing is that we can start this process several times to allow sorting in parallel across multiple processes, even on different machines:

In the window at the bottom only one piece of debug information is visible: the number of elements sent.

If there were a prize for complicating an inherently simple procedure, after this last piece of code I could compete for first place without any trouble. In fact this example - quicksort - has a serious problem that prevents it from being used profitably for distributed computing: first of all, the portions of the array to sort have to be sent in full when, with far less data and just references to the array to sort, everything would work much faster. But this is what came to mind as an example...

Let's start drawing some conclusions: the simplest is that I need to work on better examples; the second is that the message broker (RabbitMQ in this case) really does its job very well if we decide to embrace the world of micro services. Going back to the cube, we can make processes communicate along any axis quickly and reliably. We can also make inter-process communication easier when it comes to distributing configuration changes. Think back to the earlier example where we used fanout communication (no filters, delivered to every queue bound to that exchange). Now imagine a multitude of micro services running on one or more servers. By default each could have its own configuration file read at start-up: but what happens if we need to change one of those parameters? For example, all the processes might be configured with the name of an FTP server where files are to be uploaded. If the URI of that server had to change, what should we do? Edit the configuration files of every process? And what if we forget one? A more practical solution could be to set up a micro service that does just this: all processes, once started, would read the configuration file stored next to the executable by default, and could then ask that micro service for the latest configuration (which could override the previous one). Alternatively, each micro service could have a private queue bound to an exchange through which configuration changes are pushed in real time. This would even allow us to send the new URI for the FTP server, wait until all the processes have updated themselves, and then switch off the old server, confident that nobody is using it any more.
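As a rough illustration of that last idea (none of this is in the sample solution; the exchange name is invented and the snippet reuses the client objects from the earlier examples), each micro service could listen for configuration updates on its own server-named queue bound to a dedicated fanout exchange:

// Hypothetical configuration-broadcast exchange shared by all micro services
ModelCentralized.ExchangeDeclare("ConfigBroadcast", ExchangeType.Fanout);
string configQueue = ModelCentralized.QueueDeclare("", false, true, true, null);
ModelCentralized.QueueBind(configQueue, "ConfigBroadcast", "");

var configConsumer = new QueueingBasicConsumer(ModelCentralized);
ModelCentralized.BasicConsume(configQueue, true, configConsumer);
while (true)
{
    var e = (RabbitMQ.Client.Events.BasicDeliverEventArgs)configConsumer.Queue.Dequeue();
    // For example the new FTP URI, which overrides the value read from the local config file
    string newFtpUri = Encoding.Default.GetString(e.Body);
}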

At the risk of repeating myself: we can write a myriad of micro services for the most varied operations, from accessing a database table, to sending emails, to generating charts; each service is reachable through an exchange and, as demand grows, we can attach more processes that the message broker can call, balancing the load across all the available resources. And interoperability? The message broker does not care who or what is calling it. It could be a client written with the .NET Framework, as in this post, or Java... In that case, how do we communicate with objects more complex than the strings used in the first examples when we send, as in the quicksort example, serialised objects that are native to a specific technology?

Let's see what happens. In Esempio6A we try exactly that: first we create a simple object to serialise, such as an array of integers:

var content = new int[] { 1, 2, 3, 4 };

Then we send it to RabbitMQ with the code we already know:

IBasicProperties basicProperties = ModelCentralized.CreateBasicProperties();
var msgRaw = ObjectToByteArray(content);
ModelCentralized.BasicPublish("", QueueName, basicProperties, msgRaw);

ObjectToByteArray is a function also used by the quicksort example:

public static byte[] ObjectToByteArray<T>(T obj) where T : class
{
    BinaryFormatter bf = new BinaryFormatter();
    using (var ms = new MemoryStream())
    {
        bf.Serialize(ms, obj);
        return ms.ToArray();
    }
}
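The quicksort example also calls a ByteArrayToObject counterpart that the post does not show; assuming it mirrors the BinaryFormatter approach above, a minimal sketch could be:

// Possible counterpart: deserialise the byte array back into the original object
public static T ByteArrayToObject<T>(byte[] raw) where T : class
{
    BinaryFormatter bf = new BinaryFormatter();
    using (var ms = new MemoryStream(raw))
    {
        return (T)bf.Deserialize(ms);
    }
}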

And now all that is left is to see what happens when, once this object has been put in a queue, we read it with another technology. Let's try node.js (I confess that my interest in message brokers was born with this technology, and only later did I "convert" it to the .NET Framework world). With npm and nodejs installed on your machine, from a terminal you just need to install the package:

npm install amqp

Then, in a text editor:

var amqp = require('amqp');
var connection = amqp.createConnection({ host: 'localhost' });

// Wait for connection to become established.
connection.on('ready', function () {
  // Use the default 'amq.topic' exchange
  connection.queue('Example6', function (q) {
    // Catch all messages
    q.bind('#');
    // Receive messages
    q.subscribe(function (message) {
      // Print messages to stdout
      console.log(message.data.toString());
    });
  });
});

Node.js makes it all even simpler thanks to its asynchronous nature. Connected to the RabbitMQ server on localhost, when the ready event fires on connection we attach to the "Example6" queue (the same queue used in Esempio6A) and with subscribe we wait for messages. Let's launch this mini app:

node example1.js

Then we start Esempio6A.exe:

Obviously, node.js has no idea what to do with that incomprehensible byte array. The solution? I do not know which approach is the fastest or the best, but the simplest and the most reusable across any technology is JSON. We can transform the sending code seen earlier like this:

var json = new JavaScriptSerializer().Serialize(content);
msgRaw = Encoding.Default.GetBytes(json);
ModelCentralized.BasicPublish("", QueueName, basicProperties, msgRaw);

Once it is placed in the queue and read by the node.js app, we get:

Perfect: with node.js using JSON is trivial, and the same goes for the .NET Framework, since just as we can serialise an object to JSON we can also turn it back into its original form. Are there better solutions? I am open to any advice, as always.
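Just to close the loop on the .NET side, a minimal sketch of the reverse step (assuming the same JavaScriptSerializer class used above and a delivery e received as in the earlier examples) could be:

// Turn the JSON payload back into the original object
string json = Encoding.Default.GetString(e.Body);
int[] restored = new JavaScriptSerializer().Deserialize<int[]>(json);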

It is time to wrap up. Conclusions? There are none. A message broker simplifies the structure of apps based on micro services. Is it the only choice? No. I would have liked to continue this discussion with communication between micro services through Redis, by the Italian Salvatore Sanfilippo, but my knowledge there is still lacking and I immediately ran into problems I have not yet solved (available time is what it is). One advantage I noticed right away compared with RabbitMQ is its frighteningly superior speed. Maybe in the future I will tackle the subject on this blog... provided, I repeat, that the will and the time allow it. Another option is Akka.NET: here too performance is higher and message communication is simpler; the big problem I ran into straight away is the difficult interoperability between different technologies, but my novice-level knowledge did not let me get beyond the basics. Ok, that's enough.

All the sample code is available here:

https://bitbucket.org/sbraer/quicksort-with-rabbitmq/overview


Continue reading Divide et impera con c# e un message broker.


(C) 2017 ASPItalia.com Network - All rights reserved


          Vera Rubin's Dark Matter is a hoax! Since 2012 I have been waiting for Stacy Mc Gaugh to post his answer! He prefers to believe in the lucubrations of Vera Rubin and Mordehai Milgrom   


by Yanick Toutain
REVACTU
December 5, 2012


READ

WEDNESDAY 14 DECEMBER 2016

I have been waiting patiently since April 14, 2012 for Stacy Mc Gaugh to answer me: did he or did he not verify the statements of Vera Rubin?
Does he still think that Newton's laws prohibit high velocities for the peripheral stars of galaxies?
Did he or did he not verify that Mordehai Milgrom's lucubrations were useless for explaining that the centripetal acceleration is at its maximum for the stars farthest from the center of galaxies?
He never deigned to reply!
Since Isaac Newton did not have the right qualifications either, he probably would not have answered him either.
Too bad!
Especially since his complement to the discoveries of Tully-Fisher has placed him among the greatest discoverers of facts of recent decades.




MY 2012 POST TO STACY MC GAUGH



Isaac Newton VS Fritz Zwicky-Vera Rubin (EN) No dark matter 2012 03 30 first video test

          snappy sensors   
Sensors are an important part of IoT. Phones, robots and drones all have a slew of sensors. Sensor chips are everywhere, doing all kinds of jobs to help and entertain us. Modern games and game consoles can thank sensors for some wonderfully active games.

Since I became involved with sensors and wrote QtSensorGestures as part of the QtSensors team at Nokia, sensors have only gotten cheaper and more prolific.

I used Ubuntu Server, snappy, a Raspberry Pi 3, and the senseHAT sensor board to create a senseHAT sensors snap. Of course, this currently only runs in devmode on the Raspberry Pi 3 (and Pi 2 as well).

To future proof this, I wanted to get sensor data all the way up to QtSensors, for future QML access.

I now work at Canonical. Snappy is new and still in heavy development, so I did run into a few issues. First up: QFactoryLoader, which finds and loads plugins, was not looking in the correct spot. For some reason, it uses $SNAP/usr/bin as its QT_PLUGIN_PATH. I got around this for now by using a wrapper script and setting QT_PLUGIN_PATH to $SNAP/usr/lib/arm-linux-gnueabihf/qt5/plugins

The second issue was that QSensorManager could not see its configuration file in /etc/xdg/QtProject, which is not accessible to a snap. So I used the wrapper script to set XDG_CONFIG_DIRS to $SNAP/etc/xdg

[NOTE] I just discovered there is a part named "qt5conf" that can be used to set up Qt's env vars by using the included command qt5-launch to run your snap's commands.

Since there is no libhybris in Ubuntu Core, I had to decide which QtSensors backend to use. I could have used sensorfw, or maybe iio-sensor-proxy, but RTIMULib already worked for the senseHAT. It was easier to write a QtSensors plugin that used RTIMULib, as opposed to adding it into sensorfw. iio-sensor-proxy is more for laptop-like machines and lacks many sensors.
RTIMULib uses a configuration file that needs to be in a writable area, to hold additional device-specific calibration data. Luckily, one of its functions takes a directory path to look in. Since I was creating the plugin, I made it use a new variable, SENSEHAT_CONFIG_DIR, so I could then set that up in the wrapper script.

This also runs in confinement without devmode, but involves a simple sensors snapd interface.
One of the issues I can already see with this is that there are a myriad of ways to access sensors: different kernel interfaces (iio, sysfs, evdev) and different middleware (Android SensorManager/hybris, libhardware/hybris, sensorfw and others I either cannot speak of or do not know about).

Once the snap goes through a review, it will live here https://code.launchpad.net/~snappy-hwe-team/snappy-hwe-snaps/+git/sensehat, but for now, the working code is at my sensehat repo.

Next up to snapify, the Matrix Creator sensor array! Perhaps I can use my sensorfw snap or iio-sensor-proxy snap for that.
          The other half of the Jolla story   
They say there are two sides to every coin, and that holds true for the story of the history leading up to Jolla and its Sailfish OS. The Jolla story usually starts out with Nokia, but it's really a convergence with Nokia as the center point.

This side of the story starts in Norway, not Finland. Oslo, in fact. Not with Nokia, but with a small company named Trolltech.

I won't start at the very beginning but skip to the part where I join in and include a bit about myself. It was 2001, I was writing a Qt based app called Gutenbrowser. I got an email from A. Kozak at Trolltech, makers of Qt. Saying that Sharp was planning to release a new PDA based on Qt, and wouldn't it be cool if Gutenbrowser would be ported to it? I replied, yes, but as I have no device it might be difficult. He replied back with a name/email of a guy that might be able to help. Sharp was putting on a Developer Symposium where they were going to announce the Zaurus and hand out devices to developers. I jumped at the chance.

It was in California. At that time I was in Colorado. Jason Perlow was working for Sharp at that time, and said he had an extra invite to the Developer symposium. WooHoo! The Zaurus was going to run a Qt based interface originally named QPE, later named Qtopia (and even later renamed Qt Extended). The sdk was released, so I downloaded it and started porting even before I had a device to test it on.

Qtopia was open source, and it was available for developers to tinker with, and put on other devices. There was a community project based on the open source Qtopia called Opie that I became involved with. That turned into me getting a job with Trolltech in Australia, where Qtopia was being developed, as the Qtopia Community Liaison, which luckily later somehow turned into a developer job.

Around the time that Nokia came out with the Maemo tablets, I was putting Qtopia on them. N770, N800, N810, and N900 all got the Qt/Qtopia treatment. (Not to mention the OpenMoko phones I did as well).

Then I was told to flash a Qtopia on an N810 because some Trolls were meeting with Nokia. That became two or three images I had to flash over the course of a few weeks. I knew something was up.

Around this time, one of the Brisbane developers (A. Kennedy, I'm looking at you!) had a Creative Friday project to make a dynamic user interface framework using XML. (Creative Friday was something Trolltech did that allowed developers to spend every Friday on research projects, unless there was the impending doom of bug fixes or a release.) It was really quite fluid, and there was a "prototype" interface running on that N810 as well. It only took a few lines of non-C++ code to get dynamic UIs. This would have turned into what the next generation of Qtopia's interface would be made with. It was (and still is) quite amazing.

Then came the news that Nokia was buying Trolltech! Holy cow! A HUGE company that makes zillions of phones wanted to buy little ol' Trolltech. But they already had a Linux based interface - Maemo - that was based on the Gtk toolkit, not Qt. WTH!?

Everyone speculated they wanted Trolltech for Qtopia. Wrong. Nokia wanted Qt, and decided to ditch Qtopia. We had a wake for the Qtopia event loop to say our good riddance. All of us in Brissie worried about our jobs.

So our little Trolltech got assimilated into this huge behemoth phone company from Finland. Or was it that Trolltech took over Nokia...? Nokia had plans for Qt that would provide a common toolkit for their massively popular Symbian and new Linux based phones.

The Brisbane office started working on creating the QtMobility API's. Yes, there are parts of Qtopia in QtMobility.

Meanwhile, that creative friday xml interface was still being worked on. It got canceled a few times and also revived a few. That eventually evolved into QML, and QtQuick.
Then came the N9 and MeeGo, which was going to use this new-fangled dynamic UI. MeeGo was also open source, and its community versions were called Mer and Nemo. Yes, there are parts of Qtopia in MeeGo.

The rest of the story is famous, or rather, infamous now. Nokia made redundant the people working on MeeGo. Later on, all of us Brisbane developers, QA and others were also made redundant. The rest of what I call the Trolltech entity got sold to Digia. The QA server room was packed up and shipped to Digia, who is doing a fantastic job of getting Qt Everywhere!

A few of the guys who were working on MeeGo got together, founded a company called Jolla, and created a Linux based mobile OS named Sailfish, based on Mer. Yes, there are a few Trolltech Trolls working for Jolla. And yes, there are parts of Qtopia in Sailfish.



          Gutenbrowser   
It has been 7 years since I started the Gutenbrowser project on Sourceforge. I had developed it for at least a year or so before that.
I have started hacking on it again, to fix a few things up. It is still far from what I entirely imagine when I think about what it should be.

In 7 years, Project Gutenberg went from a few hundred books, maybe even a few thousand, to tens of thousands. All free.
Now there are devices like the Kindle where you can buy ebooks and read them.
You don't have to buy any books from the Gutenberg project. They are all free. Every one of them.

I will create a new release in the coming weeks, with Linux, Windows and Mac OS X binaries.
You can always get Gutenbrowser in Debian. I am not entirely sure which version it is, but they usually keep it up to date, and occasionally pester me for updates or bug fixes.

I am pretty sure I started working on gutenbrowser 9 years ago. My how things have changed in the last 9 years... blah blah blah....
          Nokia and open source   

Nokia makes a big news announcement

Ok, so they made an announcement that they are buying a controlling share of Symbian, and will be putting it under an open source license. Specifically, the Eclipse Public License (EPL).

What remains to be seen is just how much source will be released as open. The EPL is very generous to proprietary interests, allowing contributors to relicense their derived works. Which means "it" won't stay open source. If it is anything like their Maemo offering, it won't be much. Just enough that PR can call it open source.

Unfortunately, the EPL is not compatible with all the GPL and LGPL works out there, so not very many of the thousands of really great applications for Linux will be available for this. Unless, of course, an application can be relicensed and uses Qt, which will surely be licensed to allow development on this new EPL Symbian.

As with Nokia news lately, I feel it is generally good and it is great to see such a huge company try and be a better 'citizen'.


          oh ya!   
Andrew Morton (kernel hacker), during his keynote address at linux.conf.au 2005, said when asked what desktop he uses: "I won't use a desktop that developers are so stupid that they try to write gui code in C"
          responding via blog!   
wooT gotta love blog conversations!

A lovely response to Lorn Potter of Trolltech

I just love that people hold Trolltech to some higher standard. For some, nothing that TT can do is ever good enough, or Trolltech has a special, evil GPL which somehow isn't truly open source, since TT doesn't have public source development repositories.

TT releases the Greenphone with some closed source and closed kernel drivers and gets a backlash; the Neo is released with closed kernel drivers and a license that allows closed source, and is touted as the first open source Linux phone.

umm excuse me! LGPL is not about free and open source software! hello! "Dont lend your hand to raise no flag atop no ship of fools."

One of the differences is that Trolltech is not a hardware company, FIC is. TT contracted the hardware from a vendor and what you see is what Trolltech was given. Neo was designed by FIC.
          Linux Foundation Executive Director: China has a huge opportunity in open source   

[TechWeb report] June 26 news: the LinuxCon + ContainerCon + CloudOpen China (LC3) conference was recently held in China, the first time this top open source event has come to China. At the conference, Linux Foundation Executive Director Jim Zemlin announced that China Standard Software (中标软件) and Alibaba Cloud have officially become Linux Foundation members.


Linux Foundation Executive Director Jim Zemlin

This open source conference has been held in North America, Europe and Japan over the past few years, but this is the first time it has come to China. On why it came to China, Jim Zemlin explained that the Linux Foundation's membership in China has grown by 400% in recent years, and that the foundation will increase its investment in China because it believes China has the opportunity to become a world leader in open source. He noted that China's economy is a highly innovative one and that the government is very supportive of open source; the value of the software code developed through this collaborative model is measured in billions of dollars, every form of computing you can think of depends on open source, and no matter which country or company you come from, everyone benefits greatly from it.

Linux is one of the largest open source projects in the history of computing. Much of today's modern infrastructure, including stock exchanges, a large part of the Internet's infrastructure and many embedded systems, runs on Linux-based systems.

Jim Zemlin said: "Under the Linux Foundation we support many open source projects, and they are arguably among the most important open source projects in the world. We hope to build ecosystems around these projects. We create software collaboratively: different companies build the software and contribute code. That code is then used by those companies to develop new products and businesses - companies like Baidu, Tencent and Alibaba use this open source code to create new products and businesses, and once they make money they can reinvest part of the profits back into these projects."

According to Jim Zemlin, the same event will be held in China again next year. An office has already been set up in Hong Kong, and the foundation will keep increasing its investment in China, including translating Linux-related training materials into Chinese - materials for node.js, Kubernetes, OpenStack and more - in the hope that Chinese developers and Chinese companies will in turn increase their support. (Wang Meng)



          Principal Software Engineer (Scala) - $100.00 per hour - Incendia - Boston, MA   
Software Engineer, Software Engineering, Linux, Apache, MongoDB, NoSQL, MySQL, OpenTSDB, HBase, CouchDB, Basho Riak, Accumulo/sqrrl, Cassandra, Hadoop, Hive,... $100 an hour
From Incendia Partners - Mon, 19 Jun 2017 21:53:00 GMT - View all Boston, MA jobs
          How to upgrade Python 2.7 to 3.2 on Ubuntu   

How to upgrade Python 2.7 to 3.2 on Ubuntu

Reply to How to upgrade Python 2.7 to 3.2 on Ubuntu

Hi, I don't know which OS you are working on, but you should know that all Linux distros depend on the bundled 2.X version of Python. If you are on Windows, simply uninstall the version you have and download and install the new one; on Linux you need to install Python 3.X, or check whether you already have it installed. You can check the version like this:

python3 -V
If it tells you that no such package exists, you need to install it; for the distros ...

Published on 28 June 2017 by kip

          Comment on Linux 4.12 receives second release candidate by Linus Torvalds releases last Linux kernel 4.12 RC - Open Source For You   
[…] original schedule of the Linux 4.12 development, we can expect the final release next week. Meanwhile, the final RC update has emerged with just a handful of minor […]
          Open Source: A Job and an Adventure   
At LinuxCon in Düsseldorf, I gave a talk about the many ways that you can turn your work in open source into a career, so that you can get paid for all of the awesome work that you do in open source. If you already have a job in open source, I also talked about … Continue reading Open Source: A Job and an Adventure
          Lessons about Community from Studio Ghibli   
Instead of my traditional Lessons about Community from Science Fiction (video) talk, I decided to do something a little different for LinuxCon Japan. The slides from my Lessons about Community from Studio Ghibli talk are available now (with speaker notes) for my presentation on Wednesday. Abstract Communities are one of the defining attributes that shape … Continue reading Lessons about Community from Studio Ghibli
          Rode Podcaster Microphone   
Rode Podcaster Microphone

The RØDE Podcaster is a dynamic, end-address USB microphone that combines broadcast-quality audio with the simplicity of USB connectivity, allowing recording direct to a computer without the need for an additional digital interface. Including an audiophile quality 18-bit resolution, 48kHz sampling A/D converter, the Podcaster processes all of the analogue-to-digital conversion internally, bypassing the computer's lower quality on-board sound controller altogether. A headphone output on the microphone body provides zero-latency monitoring, so the user can hear exactly what is being recorded, free of delay or echo.

The Podcaster features an internal pop filter, designed to minimise plosive sounds that can overload the microphone capsule and distort the audio output. It is fully compatible with Windows 7, Windows 8 and Mac OS X computers, as well as several Linux distributions. The microphone is bus powered and features a status LED to indicate operation.

The Podcaster is ideal for podcasting, vodcasting, YouTube videos, voice recognition software, corporate videos and any production application that requires a simple yet professional voice-over microphone. It is also a convenient demo microphone for musicians and songwriters who prefer the convenience of a USB microphone but don't want to compromise sound quality. It can also be used as an iPad microphone for the Apple iPad (in conjunction with the iPad Camera Connection Kit and a powered USB hub) to provide high quality recording to various iPad audio applications such as GarageBand. The Podcaster includes a sturdy RM2 microphone ring mount. For professional applications the optional PSM1 shock mount and PSA1 boom arm are highly recommended. Comes with stand mount and 5m USB cable.

The RØDE Podcaster USB microphone is designed and made in Australia, and covered by RØDE Microphones' industry leading 10 year warranty.


          Comment from あっきぃ on "Playing with a Raspberry Pi Zero over a single USB cable"   
When you look at the SD card from Windows, the boot partition is FAT, a format that originated with Microsoft, so it can be read and written. The root partition, on the other hand, is ext4, a Linux format that Windows does not support, so it cannot be read or written. So only being able to access boot from Windows is simply how it is. There seem to be a few third-party tools for reading ext4 on Windows, but they appear to work only some of the time. (Incidentally, the situation for ext4 on the Mac is much the same.)
          Comment from あっきぃ on "Playing with a Raspberry Pi Zero over a single USB cable"   
Hello suisui. First, about the Pi Zero W: it is currently waiting for its technical conformity (giteki) certification in Japan, so if you are thinking of using it domestically, I recommend holding off for now.
https://www.raspberrypi.org/blog/pi-zero-distributors-annoucement/
(Information from Japanese resellers is below.)
https://www.switch-science.com/catalog/3200/
https://raspberry-pi.ksyic.com/main/index/pdp.id/219/pdp.open/219
As for the SD card, am I right in understanding that the PC was restarted while the disk image was being written? If so, the card has probably been only partially written and is in an incomplete state, so how about writing the disk image again? If the PC was restarted while you were editing config.txt or cmdline.txt, that normally should not break anything, so a faulty SD card is also a possibility. In that case too, try writing the image again, and if that does not work, buying a new card may be the way to go. To rewrite the image on Mac/Linux you can simply run the dd command. If you are worried (I am not too familiar with the Windows case), try formatting the card with SD Card Formatter (it cleans everything up, partitions included).
https://www.sdcard.org/jp/downloads/formatter_4/
          ANDROID   
Android phones are becoming increasingly popular around the world and are serious competition for established handset vendors such as Nokia, BlackBerry and iPhone.
But if you ask most Indonesians "What is Android?", most will not know what Android is, and those who do are usually only the people who are geeks / keep up with technology.
This is because most Indonesians only know 3 phone brands: BlackBerry, Nokia, and "other brands" :)

There are several things that make Android hard for the Indonesian market to accept (so far), among them:

  • Most Android phones use touchscreen input, which is not very popular in Indonesia,
  • Android needs a very fast internet connection to be used to its full potential, while the internet provided by Indonesian mobile operators is not very reliable,
  • And finally, the perception that Android is harder to operate/use compared with other phones such as Nokia or BlackBerry.

What is Android

Android is an operating system used in smartphones and tablet PCs. Its role is the same as that of the Symbian operating system on Nokia, iOS on Apple, and BlackBerry OS.
Android is not tied to a single phone brand; well-known vendors already using Android include Samsung, Sony Ericsson, HTC, Nexus, Motorola, and others.
Android was first developed by a company called Android Inc., which was acquired in 2005 by the internet giant Google. Android is built on a modified Linux kernel, and each release is given a codename based on the name of a dessert.
Android's main advantage is that it is free and open source, which allows Android smartphones to be sold more cheaply than a BlackBerry or iPhone even though the (hardware) features Android offers are better.
Some of Android's main features include WiFi hotspot, multi-touch, multitasking, GPS, accelerometers, Java support, support for many network types (GSM/EDGE, iDEN, CDMA, EV-DO, UMTS, Bluetooth, Wi-Fi, LTE & WiMAX), as well as the usual basic phone capabilities.

Android versions currently in circulation

Eclair (2.0 / 2.1)

An early Android version that began to be used by many smartphones; Eclair's main features were a complete overhaul of the structure and look of the user interface, and it was the first Android version to support the HTML5 format.

Froyo / Frozen Yogurt (2.2)

Android 2.2 was released with 20 new features, including speed improvements, Wi-Fi hotspot tethering and support for Adobe Flash.

Gingerbread (2.3)

The main changes in version 2.3 include a UI update, improvements to the soft keyboard & copy/paste, power management, and Near Field Communication support.

Honeycomb (3.0, 3.1 and 3.2)

An Android version aimed at gadgets/devices with large screens such as tablet PCs; Honeycomb's new features are support for multicore processors and hardware-accelerated graphics.
The first tablet to use Honeycomb was the Motorola Xoom, released in February 2011.
Google decided to temporarily close access to the Honeycomb source code, to prevent phone manufacturers from installing Honeycomb on smartphones.
This was because, with earlier Android versions, many companies had put Android onto tablet PCs, resulting in a poor user experience and giving Android a bad image.

Ice Cream Sandwich (4.0)

Android 4.0 Ice Cream Sandwich was announced on 10 May 2011 at the Google I/O Developer Conference (San Francisco) and officially released on 19 October 2011 in Hong Kong. "Android Ice Cream Sandwich" can be used on both smartphones and tablets. The main features added in Android 4.0 are Face Unlock, Android Beam, a major user interface overhaul, and a standard (native) screen resolution of 720p (high definition).

Android market share

In 2012 around 630 million smartphones are expected to be sold worldwide, of which an estimated 49.2% will run the Android OS.
Google's current data shows that 500,000 Android phones are activated every day around the world, and that figure keeps growing by 4.4% per week.
Platform                      | API Level | Distribution
Android 3.x (Honeycomb)       | 11        | 0.9%
Android 2.3.x (Gingerbread)   | 9-10      | 18.6%
Android 2.2 (Froyo)           | 8         | 59.4%
Android 2.1 (Eclair)          | 5-7       | 17.5%
Android 1.6 (Donut)           | 4         | 2.2%
Android 1.5 (Cupcake)         | 3         | 1.4%
Distribution data for Android versions in use worldwide as of June 2011

Android applications

Android has a large developer base for application development, which makes Android's functionality broader and more varied. The Android Market, managed by Google, is where Android applications, both free and paid, are downloaded.
Although not recommended, Android's performance and features can be enhanced further by rooting the device. Features such as wireless tethering, wired tethering, uninstalling crapware, overclocking the processor, and installing custom flash ROMs can be used on a rooted Android.

          How to block porn sites   

Blocking porn sites is not difficult. In general there are two (2) techniques for blocking porn sites, namely:
• Installing a filter on the user's PC.
• Installing a filter on the server connected to the Internet.

The first technique, installing a filter on the user's PC, is usually applied by parents on the home PC so that children do not surf to unwanted sites. A complete list of filters and child-friendly browsers for home use can be found at
http://www.yahooligans.com -> parent's guide -> browsers for kids.
http://www.yahooligans.com -> parent's guide -> blocking and filtering.

Some fairly well-known filters include:
Net Nanny, http://www.netnanny.com/
I Way Patrol, http://www.iwaypatrol.com/
Etc…

Of course, this kind of filtering can only be done by parents at home for children who do not yet know much about the Internet. For schools with internet facilities, the techniques above are clearly difficult to apply. The most efficient way to block porn sites is to install a filter on the proxy server used in an internet café (WARNET) or in an office where the Internet is accessed collectively over a Local Area Network (LAN). With the second technique, installing a porn site filter is not difficult. Some commercial content-filtering software packages include:

http://www.cyberpatrol.com
http://www.websense.com
http://www.languard.com
http://www.n2h2.com
http://www.rulespace.com
http://www.surfcontrol.com
http://www.xdetect.com

Perhaps the hardest part is actually obtaining a complete list of the sites that need to be blocked. That list is needed so the filter knows which sites to block. Lists of hundreds of thousands of sites to block can be downloaded for free, for example from:

http://www.squidguard.org/blacklist/
ftp://ftp.univ-tlse1.fr/pub/reseau/cache/squidguard_contrib/adult.tar.gz
ftp://ftp.univ-tlse1.fr/pub/reseau/cache/squidguard_contrib/publicite.tar.gz

For schools or offices, the open source (Linux) alternative can be attractive because no software has to be pirated. On Linux, one of the most popular proxy servers is squid (http://www.squid-cache.org), which can usually be installed together with the Linux installation itself (both Mandrake and RedHat).

Setting up filtering in squid is not difficult; we just need to add a few lines to the /etc/squid/squid.conf file. For example:

acl sex url_regex "/etc/squid/sex"
acl notsex url_regex "/etc/squid/notsex"
http_access allow notsex
http_access deny sex

Then create the files:
/etc/squid/sex
/etc/squid/notsex

Example contents of /etc/squid/notsex:
.*.msexchange.*
.*.msexcel.*
.*freetown.*
.*geek-girls.*
.*scsext.*

Example contents of /etc/squid/sex:
.*.(praline|eroticworld|orion).de
.*.(theorgy|penthousemag|playboy|1stsex|lolita|sexpix|sexshop).*

www.indonona.com
www.exoticazza.com
www.dewasex.com
www.extrajos.com
www.bopekindo.com
www.sanggrahan.org
www.vicidi.com
www.17tahun.com
www.ceritaseru.org
www.ceritapanas.com

Blacklists obtained from squidguard and similar sources can easily be added to the lists above. Shown below is the Access Control List (ACL) configuration in /etc/squid/squid.conf that I have set up on my server at home:

acl sex url_regex "/etc/squid/sex"
acl notsex url_regex "/etc/squid/notsex"
acl aggressive url_regex "/etc/squid/blacklists/aggressive/urls"
acl drugs url_regex "/etc/squid/blacklists/drugs/urls"
acl porn url_regex "/etc/squid/blacklists/porn/urls"
acl ads url_regex "/etc/squid/blacklists/ads/urls"
acl audio-video url_regex "/etc/squid/blacklists/audio-video/urls"
acl gambling url_regex "/etc/squid/blacklists/gambling/urls"
acl warez url_regex "/etc/squid/blacklists/warez/urls"
acl adult url_regex "/etc/squid/adult/urls"
acl dom_adult dstdomain "/etc/squid/adult/domains"
acl dom_aggressive dstdomain "/etc/squid/blacklists/aggressive/domains"
acl dom_drugs dstdomain "/etc/squid/blacklists/drugs/domains"
acl dom_porn dstdomain "/etc/squid/blacklists/porn/domains"
acl dom_violence dstdomain "/etc/squid/blacklists/violence/domains"
acl dom_ads dstdomain "/etc/squid/blacklists/ads/domains"
acl dom_audio-video dstdomain "/etc/squid/blacklists/audio-video/domains"
acl dom_gambling dstdomain "/etc/squid/blacklists/gambling/domains"
acl dom_proxy dstdomain "/etc/squid/blacklists/proxy/domains"
acl dom_warez dstdomain "/etc/squid/blacklists/warez/domains"

http_access deny sex
http_access deny adult
http_access deny aggressive
http_access deny drugs
http_access deny porn
http_access deny ads
http_access deny audio-video
http_access deny gambling
http_access deny warez
http_access deny dom_adult
http_access deny dom_aggressive
http_access deny dom_drugs
http_access deny dom_porn
http_access deny dom_violence
http_access deny dom_ads
http_access deny dom_audio-video
http_access deny dom_gambling
http_access deny dom_proxy
http_access deny dom_warez

With the setup above, I block not only porn sites but also sites related to drugs, violence, gambling and so on. All of the data comes from the blacklist files at www.squidguard.org.
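After changing squid.conf, Squid has to re-read its configuration before the new ACLs take effect, and the result can then be checked from a client. A minimal sketch, assuming Squid listens on its default port 3128 and that 192.168.0.1 stands in for the proxy address:

# tell the running Squid to re-read squid.conf
squid -k reconfigure

# a blacklisted site should now come back 403 Forbidden through the proxy,
# while a normal site should still load
curl -x http://192.168.0.1:3128 -I http://www.playboy.com/
curl -x http://192.168.0.1:3128 -I http://www.squid-cache.org/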

http://oke.or.id/2006/12/teknik-menangkal-situs-porno-dengan-squid-proxy/

Blocking Sites on a Mikrotik Router via Winbox


1. Open Winbox from the desktop.

2. Click the (…) button or enter the Mikrotik address in the Connect To: field.

3. A list like the one in the screenshot below will appear; choose one of the entries.

4. Then enter the Mikrotik username and password.

5. Click the Connect button.

6. The Mikrotik window will open, as in the screenshot below.

7. To block sites, open the IP menu and choose Web Proxy.

8. Configure the Web Proxy by clicking the Settings button.

9. A window like the one in the screenshot below will appear.
Configure the Web Proxy as shown, then click OK.

10. Now create the entry for the website to be blocked. Click the (+) button;
a window will appear, then fill it in as shown in the screenshot below.

11. Click OK, and an entry will appear in the Web Proxy window.

12. Test the setup by typing the word “porno” into Google.

13. Press Enter; if a page like the one in the screenshot below appears, your site blocking is working (a rough command-line equivalent is sketched below).
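For reference, the same kind of block can also be set up from the RouterOS terminal instead of Winbox. This is only a rough sketch; the proxy port and the *porno* pattern are my own assumptions and should be adapted to the settings shown in the screenshots:

# enable the built-in web proxy (8080 is an assumed port)
/ip proxy set enabled=yes port=8080

# deny any request whose destination host matches the pattern
/ip proxy access add dst-host=*porno* action=deny

# optionally redirect LAN web traffic through the proxy (transparent proxy)
/ip firewall nat add chain=dstnat protocol=tcp dst-port=80 action=redirect to-ports=8080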

http://www.tentang-komputer.co.cc/2010/01/blok-situs-porno-menggunakan-web-proxy.html
Posted by Diandra Ariesalva Blogs at 10:39

Cybercrime is a term that refers to criminal activity in which a computer or computer network is the tool, the target, or the location of the crime. Cybercrime includes, among other things, online auction fraud, check forgery, credit card fraud (carding), confidence fraud, identity fraud, child pornography, and so on.

Although cybercrime generally refers to criminal activity in which a computer or computer network is the main element, the term is also used for traditional crimes in which a computer or computer network is used to facilitate or enable the crime.

Examples of cybercrime where the computer is the tool include spamming and offenses against copyright and intellectual property. Examples where the computer is the target include illegal access (defeating access controls), malware, and DoS attacks. An example where the computer is the location of the crime is identity fraud, while examples of traditional crimes that use the computer as a tool are child pornography and online gambling.

http://id.wikipedia.org/wiki/Kejahatan_dunia_maya



During the 2004 election there was a case that caused quite a stir and hit the KPU, the institution running the election, hard. On 17 April 2004 the KPU website was defaced: the names of the participating parties were changed into jokes, although the vote-count data was not altered. The break-in was carried out by a 25-year-old named Dani Firmansyah, an International Relations student at Universitas Muhammadiyah Yogyakarta.

The police initially had difficulty tracking down the perpetrator, especially since cases like this were new to them. Early in the investigation the police were misled because the perpetrator had routed his internet protocol (IP) address through Thailand, but with persistent effort they managed to arrest the suspect after working with several parties, including the Indonesian Internet Service Providers Association (APJII) and the internet service providers (ISPs) involved.

It later emerged that the suspect's motive was to show that the KPU's performance was very poor, especially in the area of information technology; that, however, did not justify the act, and the perpetrator was prosecuted under the applicable law.

          3D ANIMATION   


Creating 3D with Blender 3D
Pusatgratis – For all PG visitors interested in the world of 3D modelling and animation: Blender 3D is free software you can use for 3D modeling, texturing, lighting, animation and video post-processing. Blender 3D, which is free and open source, is the most popular open source 3D package in the world. Its features rival those of expensive 3D software such as 3D Studio Max, Maya and XSI.
With Blender 3D you can create animated 3D objects, interactive 3D media, professional 3D models and shapes, game objects, and many other 3D creations.
Blender 3D offers the following main features:
1. A user-friendly, well-organized interface.
2. A complete toolset for creating 3D objects, covering modeling, UV mapping, texturing, rigging, skinning, animation, particles and other simulations, scripting, rendering, compositing, post production and game creation.
3. Cross-platform, with a uniform GUI on all platforms. Blender 3D runs on all versions of Windows, Linux, OS X, FreeBSD, Irix, Sun and other operating systems.
4. A high-quality 3D architecture that lets work be done faster and more efficiently.
5. Active support through forums and the community.
6. Small file sizes.
7. And, of course, it is free.
Here are a few screenshots of images and 3D animations designed in Blender 3D:
[Screenshots: sample images and animations designed with Blender 3D]
The 3D designs produced with the free, open source Blender 3D clearly hold their own against those from expensive 3D software.
Download Blender 3D from the official Blender 3D site | License: free, open source | Size: 13-24 MB (depending on your operating system) | Support: all versions of Windows, Linux, OS X, FreeBSD, Irix, Sun and several other operating systems.
Happy designing!

Stop Dreaming, Start Action (Willing to Try and to Create)

The words "stop dreaming, start action" have influenced my thinking. We often dream or fantasize about getting something; our ambitions can be sky-high and our desires as tall as mountains. But do we get everything we hope for? Of course not, because there are still actions we have to take, and that is what makes it hard to get what we want.
We should remember: what in this world do we ever get just from dreaming? There is probably no one who has become rich or successful merely by sitting around, sleeping in, or being lazy. Maybe a few exist, but they are rare: people who are rich from an inheritance, born into wealthy families, or successful through sheer luck. That is the stuff of soap operas.
I have received many emails from Mr. Joko Susilo full of advice and motivation about success, and they have shaped my thinking. Perhaps they simply fit the field I work in, computers and especially the internet; I have taken a lot from his emails, for instance that a blog needs a strong slogan. So my blog's slogan is about making Flash animation, which pushes me to be more creative in my work. One email he sent me was about six ways to grow creativity. I am very grateful to Mr. Joko Susilo; even though my work is not great, it means something to me, because a wish I have had since childhood has now come true.
The creative work I do is just an outlet for a hobby, but at least I am no longer only dreaming about doing it, even though I am not good at drawing. Perhaps all of you can do the same and make your dreams real, even if only for a moment. Life is always full of sacrifice and struggle, and we will surely be rewarded for the effort.


          Want to Build Your Own Internet Cafe (Warnet)?   

It has to be acknowledged that the Internet cafe (warnet) is the workhorse for providing cheap alternative Internet access to many people. In general there are two main things to pay attention to when building a warnet: (1) the business/management side and (2) the technology side. I strongly recommend actively following the warnet-related mailing lists on the Internet, such as asosiasi-warnet@yahoogroups.com, asosiasi-warnet-broadband@yahoogroups.com, asosiasiwarnetbandung@yahoogroups.com, kowaba@yahoogroups.com, etc., as well as the various mailing lists on information technology, such as it-center@yahoogroups.com, linux-setup@linux.or.id, linux-admin@linux.or.id, etc.

On the business/management side, the issues usually revolve around marketing, promotion, and winning customers. Never concentrate only on the technical side, because in the end business sense is the deciding factor in your revenue.

The key issue is usually the warnet business plan; most people feel very unsure because they have no firm reference point for one. I have in fact already put together a warnet business plan as an Excel file whose numbers can be adjusted to conditions in the field. That business plan file, along with plenty of other information about warnets, can be downloaded free of charge from http://www.detik.com/net/kolom-warnet/ and http://www.bogor.net/idkf/fisik/warung-internet/. A business plan really has only a few components: (1) the investment, usually around Rp 4 million per PC; (2) the operating costs, most of which go to paying Telkom and the ISP, plus staff, electricity, etc.; and (3) the tariff.

If you really get a feel for the warnet business, the business sense behind it is actually very simple:

·         Put the warnet somewhere with lots of people, especially young people.
·         The more computers in use, the lower the tariff can be and the faster the investment is recovered.
·         The minimum feasible number of computers for Web access is about 5 if you are using a dial-up telephone connection.

Specifically for warnets in schools or small universities, there are a few simple tips worth noting as well:

·         If possible, have students pay directly through their tuition fees (SPP).
·         Involve students in running the warnet so they feel a sense of ownership, and so they learn the technology in more depth.
·         Close off Web access and chatting, and use e-mail as the main means of communication on the Internet. Schools usually do not realize that Web access burns the most, and the most expensive, phone time paid to Telkom.

As a rough picture, for a school with 700 students the outlay is around Rp 40 million for a warnet facility with 10 computers. That sounds expensive, but it really is not. The monthly cost of e-mail access paid to Telkom and the ISP is around Rp 500,000. Web access and chatting are deliberately closed off if the principal, teachers and foundation are not confident that students can restrain themselves from visiting bad sites while surfing the Internet.

The capital will be recovered in less than a year if every student is willing to pay an extra Rp 3,000 per month in tuition. It is a small investment with a near-certain return. Moreover, Rp 3,000 per student per month is very little; students can easily and cheerfully afford it. Note that for this Rp 3,000 per month students only get e-mail access on the Internet. Web access costs an additional Rp 3,500-5,000 per hour, as at an ordinary warnet, or, if the warnet operator is creative, part of the Web can be served from a local CD-ROM. After more than a year, once the capital has been recovered, the profits can be reinvested in additional computers so that Internet access is better and easier for the students.

On the technology side, there are several big issues to pay attention to. In general the simple topology of a conventional warnet is shown in the figure: the warnet consists of a Local Area Network (LAN) connected to the Internet through a gateway that sometimes also acts as a server. At a more complex level, this concept can easily be extended to connect networks in housing complexes (to build an RT/RW-net), shopping complexes, office complexes, schools and campuses, which in the end lets Indonesians access the Internet very cheaply.

Local Area Network (LAN) technology is relatively mature; a warnet usually uses a bus topology with 100BaseT Ethernet, which is cheap on the market, running at 100 Mbps on the LAN. A good Ethernet card can be had for around Rp 150,000; it is best not to buy the very cheap ones (around Rp 70,000) because they are usually poor quality. For campus networks, housing complexes and office complexes the techniques used are naturally more complex.

The long-distance link from the warnet to the ISP and on to the international Internet comes in many variations. The cheapest technique is dial-up over Telkom's telephone lines to a local ISP. This simple technique is exactly the same pattern we use for Internet access at home. The maximum speed in Indonesia is generally around 33.6 Kbps because the quality of the existing Telkom network is poor. The monthly access cost for a fairly active warnet on such a dial-up line is around Rp 2.5 million per month at 33.6 Kbps, which in the end limits the number of computers that can share the connection to only about 7-10.

The more advanced warnets, office complexes and housing complexes generally no longer use Telkom, because besides the poor network quality, the access speed is slow, all of which cuts into the warnet's profit. These more advanced operators generally use home-built microwave equipment on the 2.4 GHz band; some also experiment at 5.8 GHz, and lately a few colleagues have tried infrared optical links between buildings in Jakarta. The transmission speeds achievable with this microwave/infrared equipment are quite astonishing, around 2-11 Mbps, far higher than Telkom. Pushed to the limit, infrared links can reach 155 Mbps.

The warnet colleagues in Bandung have been bold enough to build their own earth station for direct Internet access at 1 Mbps, which is then distributed over 2.4 GHz microwave at 11 Mbps. The whole infrastructure is shared among a few dozen warnets at a cost of around Rp 4-5.5 million per month per warnet. That is quite cheap, considering the speeds achieved: 1 Mbps to the Internet and 11 Mbps between warnets.

On another occasion I will try to explain warnet technology in more detail. In the meantime, I suggest being active on the various warnet mailing lists and pulling the free information about warnets available at http://www.bogor.net/idkf.

          Pangu 1.2.1 Jailbreak Download for Windows [iOS 7.1 / 7.1.1 / 7.1.2 Support]   

Soon after the Pangu 1.2 update for the iOS 7.1.x jailbreak was launched, the team from China released another upgrade. You can now download Pangu jailbreak 1.2.1 for Windows computers and get fixes for some of the previous bugs.

Pangu is the only jailbreak for iOS 7.1 through 7.1.2 that exists today. It supports iPhone, iPod touch and iPad and works on Mac and Windows [different versions for different platforms]. You can easily jailbreak an iPhone on iOS 7.1.2 or the two previous firmwares with it, as the program does almost everything automatically.

The main difference between the Pangu jailbreak and Evasi0n is that you have to change the date on your iDevice while untethering iOS 7.1.1, 7.1.2 or 7.1 with Pangu; otherwise the program will fail to work. Users have to set June 2 as their smartphone or tablet date.

The Pangu jailbreak 1.2.1 download is available at the tool's official site [http://en.pangu.io/]. Unlike the preceding Mac update, Pangu 1.2.0, which solved various issues [boot loop, sandbox log etc.], the new 1.2.1 version fixes only a crash problem on PC.

Jailbreak Pangu 1.2.1 Download and Installation


Step 1. Get the newest iOS 7.1.x jailbreak for Windows.

Step 2. Once you run the jailbreak, you should follow the on-screen guide and remember to set June 2, 2014 as your date to succeed.

Step 3. Allow the program to finish jailbreaking and reboot.

You can enjoy staying jailbroken on iOS 7.1.x firmware with Cydia available on your Home screen, where tons of jailbreak apps, tweaks and hacks can be obtained for a better iDevice experience.

P.S. If you didn’t have problems with this program on Windows, there is no need to re-jailbreak with untethered Pangu jailbreak 1.2.1.

          iOS 7.1.1 Jailbreak Bootloop: How to Solve the Problem   

The only iOS 7.1.1 jailbreak available today for iPhone, iPad and iPod touch devices has been launched by hackers from China. The team behind this exploit has created programs to support Windows and Mac. The interface is now available in English to support foreign users who wish to jailbreak their iDevices but don't want to see everything in Chinese.

The first version of Pangu used an exploit found by i0n1c, who isn't happy about it. As he tweeted recently, it is surely "no way okay" for someone to present a public jailbreak using a vulnerability found by Stefan Esser, also known as i0n1c. The well-known hacker had revealed his exploits to students at one of his training courses for hackers. But the Chinese team now assures everyone that they have swapped out the holes discovered by Esser and moved to new exploits.


Version 1.1 of the new untethered Pangu 7.1.1 jailbreak seems to burn at least two exploits. As Esser notes, they are simply "wasting ever more bugs on something unimportant," as the minor update with Mac support offers only an untethered 7.1.1 jailbreak and nothing more.

While the jailbreaking community doesn't like what the Chinese hackers did, some users are taking advantage of the iOS 7.1.1 exploit and jailbreaking their iDevices. The question is what users will do once Apple officially releases its next big iOS 8 update, with most of today's exploits killed. Will hackers create a new program or update their old one to support untethered jailbreaking of that firmware?

It doesn't look like the Chinese team wants to save its vulnerabilities for new versions. What do you think about their release and their possible plans for the future? Do you believe they will find more holes in the mobile operating system?
          iOS 7.1.1 Jailbreak Pangu for Linux and English Version Coming Soon   
Chinese developers promise to update their untethered 7.1.1 Pangu jailbreak with an English version and plan to release a Linux build.

The first 7.1.1 jailbreak with an English version is coming soon. This has been confirmed by the Pangu hackers from China, who released their exploit for 7.1.1 firmware versions earlier this week.

The program currently supports only Windows PCs, but the hackers say their iOS 7.1.1 Pangu jailbreak for Linux should also be launched this year. The news was shared through the hackers' Weibo account [this service is similar to Twitter]. They will also present a Mac version of the jailbreaking program, which is great, as more users will be able to jailbreak their iPod touch, iPad and iPhone.



The English version of the iOS 7.1.1 Pangu jailbreak should be available for all platforms, meaning you'll be able to understand everything no matter whether you are jailbreaking on a Mac, PC or Linux computer. Right now many of the program's screens are in Chinese or show question marks instead of letters.

Users who were the first to download the untethered Pangu 7.1.1 jailbreak were met with an 80 MB file, because the hackers did not compress the Cydia package in their rush to release the exploit to the public as soon as possible. They promise to fix this in the next updates.

You can try jailbreaking right away, or keep waiting until other well-known and trusted hackers present their own jailbreaks for firmware 7.1.x.

          Junior/Senior Drupal Software Developer - Acro Media Inc. - Okanagan, BC   
Some understanding of Linux hosting if at all possible. Hopefully, you have installed Linux on something at least once, even if it was just your Xbox, phone.....
From Acro Media Inc. - Wed, 12 Apr 2017 12:28:59 GMT - View all Okanagan, BC jobs
          12265 Senior Programmer Analyst - CANADIAN NUCLEAR LABORATORIES (CNL) - Chalk River, ON   
Understanding of server technologies (Internet Information Server, Apache, WebLogic), operating systems (Windows 2008R2/2012, HP-UX, Linux) and server security....
From Indeed - Wed, 07 Jun 2017 17:50:30 GMT - View all Chalk River, ON jobs
          Embedded Electrical Engineer - Littelfuse - Saskatchewan   
Real Time Operating System (RTOS) experience such as FreeRTOS, MQX, or Embedded Linux. If you are motivated to succeed and can see yourself in this role, please...
From Littelfuse - Thu, 15 Jun 2017 23:12:35 GMT - View all Saskatchewan jobs
          INFRASTRUCTURE SPECIALIST - A.M. Fredericks - Ontario   
3 – 5 years managing Linux servers. Ability to build and configure servers from the ground up, both Linux &amp; Windows....
From A.M. Fredericks - Sat, 17 Jun 2017 05:07:14 GMT - View all Ontario jobs
          DEVOPS PROGRAMMER - A.M. Fredericks - Ontario   
Proficient in LINUX (Python/BASH scripting/CRON). Our company looking to automate core functionality of the business....
From A.M. Fredericks - Sat, 17 Jun 2017 05:07:08 GMT - View all Ontario jobs
          Software engineer - Stratoscale - Ontario   
Highly proficient in a Linux environment. Our teams are spread around the globe but still work closely together in a fast-paced agile environment....
From Stratoscale - Thu, 15 Jun 2017 16:47:38 GMT - View all Ontario jobs
          Solution Architects - Managed Services - OnX Enterprise Solutions - Ontario   
Working knowledge of both Windows and Linux operating systems. OnX is a privately held company that is growing internationally....
From OnX Enterprise Solutions - Mon, 12 Jun 2017 20:52:05 GMT - View all Ontario jobs
          IT Lead, Server Support - Toronto Hydro - Ontario   
Knowledge of server and storage systems such as Red Hat Enterprise Linux, Windows Server 2008 and 2012 Operating Systems, Active Directory, Virtual Desktop...
From Toronto Hydro - Mon, 12 Jun 2017 19:52:14 GMT - View all Ontario jobs
          Senior EPM Solutions Architect - AstralTech - Ontario   
Solid understanding of Linux and Microsoft Windows Server operating system. Senior EPM Solutions Architect....
From AstralTech - Sat, 13 May 2017 08:40:54 GMT - View all Ontario jobs
          FULL-STACK JAVA DEVELOPER (M/F) - ON-X - Ontario   
Operating systems/virtualization (Linux, Windows, VMware, Docker). As a Full-Stack Developer, you will carry out the following tasks:....
From ON-X - Thu, 06 Apr 2017 07:41:27 GMT - View all Ontario jobs
          Java Developer - Linux/Android (M/F), Technical Expertise - ON-X - Ontario   
Linux and Android. Your mission will be the development and design of Android applications....
From ON-X - Wed, 05 Apr 2017 07:39:29 GMT - View all Ontario jobs
          Java J2EE Developers - Technical Integrators (M/F), Technical Expertise - ON-X - Ontario   
Windows, Linux, VMWare. You will be in charge of developing innovative strong-authentication solutions....
From ON-X - Sun, 02 Apr 2017 07:26:34 GMT - View all Ontario jobs
          J2EE Developer (M/F), Technical Expertise - ON-X - Ontario   
Windows, Linux, VMWare. As part of our growth, we are looking for a JAVA J2EE Design and Development Engineer....
From ON-X - Sun, 02 Apr 2017 07:26:32 GMT - View all Ontario jobs
          Product Verification Engineer - Evertz - Ontario   
Proficient with Linux and high level programming or scripting languages such as Python. We are expanding our Product Verification team and looking for...
From Evertz - Fri, 24 Mar 2017 05:42:40 GMT - View all Ontario jobs
          Customer Success Engineer - Stratoscale - Ontario   
Extensive experience troubleshooting remote Linux system issues. As a Customer Success Engineer, you will be providing technical assistance to Stratoscale...
From Stratoscale - Thu, 09 Mar 2017 21:32:53 GMT - View all Ontario jobs
          Cloud (AWS) Engineer - FOUND PEOPLE INC. - Ontario   
Services, Cloudwatch, Cloudformation, CloudTrail, Config, IAM, WAF, Cognito, API Gateway, SQS, SNS, SESHighly proficient with Linux server operating systems and...
From FOUND PEOPLE INC. - Thu, 11 May 2017 09:52:15 GMT - View all Ontario jobs
          How to Install The Latest Mesa Version On Debian 9 Stretch Linux   
Mesa is a big deal if you're running open source graphics drivers. It can be the difference between a smooth experience and an awful one. Mesa is under active development, and it sees constant noticeable performance improvements. That means it's really worthwhile to stay on top of the latest releases.
          CrossOver for Android Lets You Run Windows Apps on Intel-Based Chromebooks   
CodeWeavers, the commercial company behind the well-known CrossOver for Linux and Mac application that lets users install and run Windows apps and games, is still working to release an Android version.
          Extreme Tux Racer 0.7.4   
One of the major games for Linux is ‘Extreme Tux Racer’, available for Android, Linux, Microsoft Windows, Macintosh operating systems and Ubuntu Touch. The goal of the game is to control Tux, or another chosen character, to get to the bottom of the hill. The character will slide down the hill of snow and ice on his belly. Along the way you can pick up herring.
          JUNIOR IT ASSISTANT FOR VFX COMPANY - Goldtooth Creative Agency Inc. - Vancouver, BC   
Experience in Windows and or Linux Networking, including Active Directory, DHCP, DNS. Disaster Recovery planning, implementation and testing....
From Indeed - Tue, 27 Jun 2017 20:13:00 GMT - View all Vancouver, BC jobs
          Comments on GNU/Linux RHEL4.4 (Nahant) system administration (vsftpd, vncserver, stty, database import/export, etc…) by jmax   
Thanks for the localtime tip; my times and dates were displaying incorrectly, and this fixed everything :-)
          Fortune Not Found   

This week:

James Bond
David Drake
Linux Powered Rifle

Music for the show provided by Reed Love.


          Chromium-based: Opera 26 stable for Linux   
The Chromium-based Opera browser is now also available as a stable release for Linux. In addition, distributions are allowed to repackage the binaries and redistribute them themselves. The browser's features are the same on Windows, Mac OS X and Linux. (Opera, Browser)
          Docker / VMware Network Configuration by J666GAK   
I need to run a lab to show something. I want to use VMware with a Windows 7 client image, and then Kali Linux in Docker for Windows. But I obviously still need network connectivity between the VMs in both VMware and Docker... (Budget: $10 - $30 USD, Jobs: Linux, Network Administration)
          By: PhillyMuscle   
I am forced to use the ncurses-based YaST mostly because of all the repositories I subscribe to (almost all of the buildservice ones). I write code and work in MonoDevelop (I know, evil right?), and I'm a heavy VM user since I'm in a Windows shop at work that allows me to use Linux to develop Win software [but I work on Java/SAP stuff]. So, when I need to grab a simple package really fast and install it to do something, say... Scribus... it takes me almost 20 minutes because it has to refresh every stinkin repository. Oh, and worse, if you do a base openSUSE install, say 10.3, and then you need to update the system before you start installing your stuff because you want all the latest patches... there is nothing equivalent to "make -j" so you can take full advantage of your quad-cpu-dual-core multithreaded high-bandwidth goodness. Nope, you gotta download everything one package at a time, slowly. Grr. I hope openSUSE 11 addresses this. And more... like all the pain of trying to use Java on x86_64 vs i386.
          Looking Back on This Year   

I'll save the real-life stuff for a separate post.

Where real life and the net overlap for me is movies, so that's what I'll write about here.

Roughly this sort of thing:

A look back at the blog posts I wrote this year, 2016 edition - klimの独り言

I have tried not to write much about real life here or on Twitter, but a little should be fine.

Work and skills

As the post above suggests, I took the plunge into that path. From next year I'm due to be assigned more specialized tasks, so I'm slowly studying up again now. Ah, I wish I were stronger.

As I keep studying, diving into unknown fields on instinct leaves me feeling like a jack of all trades and master of none, so narrowing down what to study is probably important. It's also what the Qiita post below says.

Even though we casually say "programming," the technical fields to learn are wide-ranging.

Even limiting the discussion to the Web, there are:

  • Client-side programming such as HTML, CSS and JavaScript
  • Server-side programming such as Ruby on Rails
  • Database knowledge such as MySQL
  • Knowledge of server setup such as Linux and Apache
  • Knowledge of tools and techniques such as software testing and version control with Git

and so on; if you start counting, there is no end to what you could learn. Time is finite, and learning every technology is simply impossible.

Consciously narrowing down the technical fields you actually need is an important habit for accelerating your study of programming.

Seven habits to accelerate your programming study - Qiita

That said, my concentration doesn't last very long, so I sometimes drift off and spend time on unfamiliar technologies (that's how I ended up learning Docker). I really do need to study with a plan.


Travel

Until now I was a dyed-in-the-wool indoor type, so I never spent the money and time to go anywhere far away.

But this year I spent a week over Golden Week in Tokyo and Oarai (for Girls und Panzer), made a lightning trip to Tokyo in September (for an event for the late voice actress Miyu Matsuki), and took an overnight trip to Sendai in autumn (to a beer factory and a whisky distillery); I think I traveled farther than at any other time in my life.

Starting from Akita, trips tend to polarize into either Tokyo or Sendai, so from now on I'd like to try places I don't usually get to. Sapporo, maybe?

Movies

Up to now I had watched movies online, on DVD and on TV, but rarely in a theater. Going once a year was a lot; a trip to the cinema was genuinely unusual for me.

But after seeing the Girls und Panzer movie seven or eight times, going to the cinema stopped feeling like a hurdle, so I took the opportunity to watch all kinds of films. I went to see "Mr. Holmes" precisely because I wanted to try watching, in a theater, the kind of movie I had never seen before.

Another big factor was meeting someone this year who genuinely loves movies. Thanks to that person my world got wider, and I started watching things my old self would never even have considered. The DVD below is one they recommended to me.

My impressions are written at the link, so I won't repeat them here, but what moved me was that being handed a disc with "you should watch this" was itself a first for me, and watching the film while wondering why that person had recommended it felt fresh. Watching a work while thinking not just about what I like but about what drew them in was something I had never done before.*1


Summary

Next year I hope to build on this year's efforts and the things I newly started, and to branch out into even newer things, while also keeping up what I began this year.

*1: Incidentally, I showed the passage above directly to that person. Hearing them say "Yes, that's exactly it!" made me really happy.


          Linux-driven Cortex-A7 SoC targets 3D and HD-enabled HMI   
Renesas unveiled a “RZ/G1C” SoC for HMI applications with 1x or 2x 1GHz Cortex-A7 cores and a PowerVR SGX531 GPU, plus a Linux-supported starter kit. The RZ/G1C SoC joins other similarly Yocto Project supported Renesas RZ/G SoCs such as the dual-core RZ/G1E and RZ/G1M with 1GHz Cortex-A7 and 1.5GHz Cortex-A15 cores, respectively. There’s also an […]
          LPIC-1 Training and Discounted LPIC Exams at the Indy Linux Fest!   

Nothing that exciting happens around where I live. I have to go to Philly for some type of event, and considering I live 4 minutes from work, driving an hour to Philly is a lot for me :p (seriously though, I fall asleep while driving :p)


          LPIC-1 Training and Discounted LPIC Exams at the Indy Linux Fest!   

dehcbad25 wrote:

I wish I was around there. Those are great prices.

For real!


          LPIC-1 Training and Discounted LPIC Exams at the Indy Linux Fest!   

 

dehcbad25 wrote:

I wish I was around there. Those are great prices.

Yeah it is, I'm not sure what the normal training price is but the tests themselves are normally $175 each.


          LPIC-1 Training and Discounted LPIC Exams at the Indy Linux Fest!   

I wish I was around there. Those are great prices.


          LPIC-1 Training and Discounted LPIC Exams at the Indy Linux Fest!   

For anyone that lives in or around the Indy area that wants to get their LPIC, check this out:

LPIC-1 Exam Cram from the LPIC ($99):
http://www.indianalinux.org/cms/lpi_exam_cram

Discounted LPIC exams (most are $99) & BSD Exams:
http://www.indianalinux.org/cms/certification2011

This topic first appeared in the Spiceworks Community
          Sitebytes migrates to Red Hat Enterprise   
Sitebytes is in the process of converting all web hosting and dedicated server subscriptions to Red Hat Enterprise Linux.
          Comment on Add or Change Keyboard Shortcuts in Linux Mint by sebasporto   
Hi, do you know how to change the short cuts for copy and paste?
          Comment on Add or Change Keyboard Shortcuts in Linux Mint by Paul Bolger   
Really dumb question, but how would you set a custom shortcut to send a single keystroke? I need to map the multimedia keys on my keyboard to regular keypresses like "<" to send to a video editing program. I've tried "echo <" but that doesn't seem to simulate a keystroke.
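One possible answer (my suggestion, not part of the original comment thread) is to bind the custom shortcut to a command that synthesizes the keypress, for example with the xdotool utility, assuming it is installed and you are on an X11 session:

# synthesize a literal "<" keypress (X keysym "less") in the currently focused window
xdotool key less

In Mint's keyboard-shortcut dialog that command can be attached to a multimedia key as a custom shortcut, though whether the video editor accepts synthesized key events is something to test.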
          Manager, IAAS Cloud Infrastructure DevOps / Request Technology - Stephanie Baker / Chicago, IL   
Request Technology - Stephanie Baker/Chicago, IL

Prestigious Professional Services Organization is seeking multiple Cloud Infrastructure DevOps Specialists. Candidate will act as a Subject Matter Expert in area of expertise and support clients to design, build and run an agile, scalable, standardized and optimal IT infrastructure to deliver operational efficiencies, drive workplace productivity and meet dynamic business needs.

Apply infrastructure improvements and contribute to assets/offerings and thought leadership. Enhances marketplace reputation. This role works with companies to make them more productive, mobile, and collaborative by using cloud computing to get their work done. You drive business relationships by analysing business performance, identifying methods to improve or scale Legacy cloud applications, and pitch ideas to identify opportunity to deliver results. You will be asked to drive business relationships with customers and establish technical integrations and set clear expectations. You will be key in managing these projects throughout the life from business planning to cloud migration, to customer adoption and client benefit. Your depth of experience in the cloud ISV ecosystem will give you first-hand insights into the vendors and industry trends that will enable your success in driving associated partnerships and deliver technical solutions for the benefit of our customers. You would have a passion for building technology-driven partnerships with strong business potential that can be measured by the customer adoption of the solutions you and the teams create, and seeking daily engagement, you'll provide seamless supporting extending the client preference. We are focused on cloud, Legacy application migration expertise, and a high performance delivered solution to delight our customers.

Responsibilities:

Eagerness to participate on a team designing, building and/or testing cloud infrastructure focused solutions.

Develop and deliver strategies and solutions for enterprise infrastructure - including Servers, databases, storage devices, etc. - that handle the data necessary for enterprise operations.

Thoroughly document all facets of the infrastructure solution.

Apply methodology, reusable assets, and previous work experience to deliver consistently high quality work.

Deliver written or oral status reports regularly.

Stay educated on new and emerging market offerings that may be of interest to our clients.

Depth of expertise in Cloud ISV ecosystems

Extensive travel may be required.

Adapt to existing methods and procedures to create possible alternative solutions to moderately complex problems.

Understand the strategic direction set by senior management as it relates to team goals.

Use considerable judgment to define solution and seeks guidance on complex problems.

Primary upward interaction is with direct supervisor. May interact with peers and/or management levels at a client and/or within.

Establish methods and procedures on new assignments with guidance.

Manage small teams and/or work efforts (if in an independent contributor role) at a client or within.

Establish a deep understanding of customer needs for your area and leverage those insights to champion partner integrations with our technologies for maximum partner and customer success

Develop partner strategies, strategic account plans, and key executive relationships, including growth opportunities, action planning, and business growth forecasting.

Qualifications:

Minimum of a Bachelor's degree.

Minimum of 4 years of experience in all aspects of cloud computing (infrastructure, storage, platforms and data), as well as cloud market, competitive dynamics and customer buying behavior.

Minimum of 4 years of experience in partner management or business development.

Minimum of 4 years of technical architecture design, evaluation, and investigation.

Minimum of 4 years of Project Management experience (Project and Resource planning using MS Project).

Minimum of 4 years of professional experience in 5 of the following 8 areas:

Server Operating Systems (eg Microsoft Windows, Unix, Linux, etc.).

Virtualization Platforms (eg VMWare, Hyper-V, etc.).

Cloud Computing and Storage, Cloud-Based Technologies (AWS, Azure, GCP)

Workload Migration Automation Tools (Double-Take, Racemi, etc.)

Cloud Management Platforms (vRealize, Gravitant, etc.)

Infrastructure provisioning and management (Puppet, Chef, Ansible)

Data center infrastructure expertise (Equinix, Teradata)

Platform- and infrastructure-as-a-service markets

Preferred Skills:

Google Cloud Platform certification

Previous Consulting or client service delivery experience.

Infrastructure (Server, Storage, and Database) discovery, design, build, and migration experience.

Experience with private and public cloud architectures, pros/cons, and migration considerations.

Architectural exposure to Windows, Linux, UNIX, VMware, Hyper-V, XenServer, Oracle, DB2, SQL Server, IIS Server, SAN, NAS, VCE/FlexPod, and other technologies.

Hands-on experience with VBScript, TCP/IP, XML, C++, JavaScript.

Technical/Team Leadership Experience.

Personnel Development Experience (hiring, resource planning, performance feedback, etc.).

Adapts existing methods and procedures to create possible alternative solutions to moderately complex problems.

Experience with Strategy Development, Partnership Management or Business Development experience in high technology.

Deep experience in the development and DevOps technology space.

Background in software engineering or product management.

Success managing and building strong working relationships with cross-functional teams internally and externally with executives at partner organizations.

Proven ability to plan and manage at both the strategic and operational level and to launch new products successfully in the marketplace.

Demonstrated success at working with cross-functional teams and building strong relationships across departments.

Ability to solve problems quickly and resourcefully with excellent communication and presentation skills.

Employment Type: Permanent
Work Hours: Full Time
Other Pay Info: DOE + bonus

Apply To Job
          Cloud and DevOps Infrastructure Professional or Manager / Request Technology - Anthony Honquest / Columbus, OH   
Request Technology - Anthony Honquest/Columbus, OH

Cloud and DevOps Infrastructure Professional or Manager

*Managers must have experience leading teams on Client engagements*

*Must have a Bachelors Degree, and be open to travel Monday thru Thursday each week*

Prestigious Global Company is currently looking for Cloud and DevOps Infrastructure Professionals at the Manager and Consultant level. Individual will act as a Subject Matter Expert in area of expertise and support clients to design, build and run an agile, scalable, standardized and optimal IT infrastructure to deliver operational efficiencies, drive workplace productivity and meet dynamic business needs. Apply infrastructure improvements and contribute to assets/offerings and thought leadership. Enhances marketplace reputation. This role works with companies to make them more productive, mobile, and collaborative by using cloud computing to get their work done. You drive business relationships by analysing business performance, identifying methods to improve or scale Legacy cloud applications, and pitch ideas to identify opportunity to deliver results. You will be asked to drive business relationships with customers and establish technical integrations and set clear expectations. You will be key in managing these projects throughout the life from business planning to cloud migration, to customer adoption and client benefit. Your depth of experience in the cloud ISV ecosystem will give you first-hand insights into the vendors and industry trends that will enable your success in driving associated partnerships and deliver technical solutions for the benefit of our customers. You would have a passion for building technology-driven partnerships with strong business potential that can be measured by the customer adoption of the solutions you and the teams create, and seeking daily engagement, you'll provide seamless supporting extending the client preference. We are focused on cloud, Legacy application migration expertise, and a high performance delivered solution to delight our customers.

Responsibilities:

Eagerness to participate on a team designing, building and/or testing cloud infrastructure focused solutions.

Develop and deliver strategies and solutions for enterprise infrastructure - including Servers, databases, storage devices, etc. - that handle the data necessary for enterprise operations.

Thoroughly document all facets of the infrastructure solution.

Apply methodology, reusable assets, and previous work experience to deliver consistently high quality work.

Deliver written or oral status reports regularly.

Stay educated on new and emerging market offerings that may be of interest to our clients.

Depth of expertise in Cloud ISV ecosystems

Extensive travel may be required.

Adapt to existing methods and procedures to create possible alternative solutions to moderately complex problems.

Understand the strategic direction set by senior management as it relates to team goals.

Use considerable judgment to define solution and seeks guidance on complex problems.

Primary upward interaction is with direct supervisor. May interact with peers and/or management levels at a client and/or within. Establish methods and procedures on new assignments with guidance.

Manage small teams and/or work efforts (if in an independent contributor role) at a client or within.

Establish a deep understanding of customer needs for your area and leverage those insights to champion partner integrations with our technologies for maximum partner and customer success

Develop partner strategies, strategic account plans, and key executive relationships, including growth opportunities, action planning, and business growth forecasting.

Qualifications:

Minimum of a Bachelor's degree.

Minimum of 4 years of experience in all aspects of cloud computing (infrastructure, storage, platforms and data), as well as cloud market, competitive dynamics and customer buying behavior.

Minimum of 4 years of experience in partner management or business development.

Minimum of 4 years of technical architecture design, evaluation, and investigation.

Minimum of 4 years of Project Management experience (Project and Resource planning using MS Project).

Minimum of 4 years of professional experience in 5 of the following 8 areas:

Server Operating Systems (eg Microsoft Windows, Unix, Linux, etc.).

Virtualization Platforms (eg VMWare, Hyper-V, etc.).

Cloud Computing and Storage, Cloud-Based Technologies (AWS, Azure, GCP)

Workload Migration Automation Tools (Double-Take, Racemi, etc.)

Cloud Management Platforms (vRealize, Gravitant, etc.)

Infrastructure provisioning and management (Puppet, Chef, Ansible)

Data center infrastructure expertise (Equinix, Teradata)

Platform- and infrastructure-as-a-service markets

Preferred Skills:

Google Cloud Platform certification

Previous Consulting or client service delivery experience.

Infrastructure (Server, Storage, and Database) discovery, design, build, and migration experience.

Experience with private and public cloud architectures, pros/cons, and migration considerations.

Architectural exposure to Windows, Linux, UNIX, VMware, Hyper-V, XenServer, Oracle, DB2, SQL Server, IIS Server, SAN, NAS, VCE/FlexPod, and other technologies.

Hands-on experience with VBScript, TCP/IP, XML, C++, JavaScript.

Technical/Team Leadership Experience.

Personnel Development Experience (hiring, resource planning, performance feedback, etc.).

Adapts existing methods and procedures to create possible alternative solutions to moderately complex problems.

Experience with Strategy Development, Partnership Management or Business Development experience in high technology.

Deep experience in the development and DevOps technology space.

Background in software engineering or product management.

Success managing and building strong working relationships with cross-functional teams internally and externally with executives at partner organizations.

Proven ability to plan and manage at both the strategic and operational level and to launch new products successfully in the marketplace. Demonstrated success at working with cross-functional teams and building strong relationships across departments.

Ability to solve problems quickly and resourcefully with excellent communication and presentation skills.

Employment Type: Permanent
Work Hours: Full Time

Pay: $100,000 to $200,000 USD
Pay Period: Annual
Other Pay Info: Bonus

Apply To Job
          Cloud and DevOps Infrastructure Manager / Request Technology - Anthony Honquest / San Jose, CA   
Request Technology - Anthony Honquest/San Jose, CA

*Need to be located in the San Francisco or San Jose area*

*Must have a Bachelor's Degree*

*Should be working in the Bay Area for the first year or so, and after that be open to travel Monday through Thursday each week*

Prestigious Global Company is currently looking for Cloud and DevOps Infrastructure Professionals at the Manager and Consultant level. Individual will act as a Subject Matter Expert in area of expertise and support clients to design, build and run an agile, scalable, standardized and optimal IT infrastructure to deliver operational efficiencies, drive workplace productivity and meet dynamic business needs. Apply infrastructure improvements and contribute to assets/offerings and thought leadership. Enhances marketplace reputation. This role works with companies to make them more productive, mobile, and collaborative by using cloud computing to get their work done. You drive business relationships by analysing business performance, identifying methods to improve or scale Legacy cloud applications, and pitch ideas to identify opportunity to deliver results. You will be asked to drive business relationships with customers and establish technical integrations and set clear expectations. You will be key in managing these projects throughout the life from business planning to cloud migration, to customer adoption and client benefit. Your depth of experience in the cloud ISV ecosystem will give you first-hand insights into the vendors and industry trends that will enable your success in driving associated partnerships and deliver technical solutions for the benefit of our customers. You would have a passion for building technology-driven partnerships with strong business potential that can be measured by the customer adoption of the solutions you and the teams create, and seeking daily engagement, you'll provide seamless supporting extending the client preference. We are focused on cloud, Legacy application migration expertise, and a high performance delivered solution to delight our customers.

Responsibilities:

Eagerness to participate on a team designing, building and/or testing cloud infrastructure focused solutions.

Develop and deliver strategies and solutions for enterprise infrastructure - including Servers, databases, storage devices, etc. - that handle the data necessary for enterprise operations.

Thoroughly document all facets of the infrastructure solution.

Apply methodology, reusable assets, and previous work experience to deliver consistently high quality work.

Deliver written or oral status reports regularly.

Stay educated on new and emerging market offerings that may be of interest to our clients.

Depth of expertise in Cloud ISV ecosystems

Extensive travel may be required.

Adapt to existing methods and procedures to create possible alternative solutions to moderately complex problems.

Understand the strategic direction set by senior management as it relates to team goals.

Use considerable judgment to define solution and seeks guidance on complex problems.

Primary upward interaction is with direct supervisor. May interact with peers and/or management levels at a client and/or within. Establish methods and procedures on new assignments with guidance.

Manage small teams and/or work efforts (if in an independent contributor role) at a client or within.

Establish a deep understanding of customer needs for your area and leverage those insights to champion partner integrations with our technologies for maximum partner and customer success

Develop partner strategies, strategic account plans, and key executive relationships, including growth opportunities, action planning, and business growth forecasting.

Qualifications:

Minimum of a Bachelor's degree.

Minimum of 4 years of experience in all aspects of cloud computing (infrastructure, storage, platforms and data), as well as cloud market, competitive dynamics and customer buying behavior.

Minimum of 4 years of experience in partner management or business development.

Minimum of 4 years of technical architecture design, evaluation, and investigation.

Minimum of 4 years of Project Management experience (Project and Resource planning using MS Project).

Minimum of 4 years of professional experience in 5 of the following 8 areas:

Server Operating Systems (eg Microsoft Windows, Unix, Linux, etc.).

Virtualization Platforms (eg VMWare, Hyper-V, etc.).

Cloud Computing and Storage, Cloud-Based Technologies (AWS, Azure, GCP)

Workload Migration Automation Tools (Double-Take, Racemi, etc.)

Cloud Management Platforms (vRealize, Gravitant, etc.)

Infrastructure provisioning and management (Puppet, Chef, Ansible)

Data center infrastructure expertise (Equinix, Teradata)

Platform- and infrastructure-as-a-service markets

Preferred Skills:

Google Cloud Platform certification

Previous Consulting or client service delivery experience.

Infrastructure (Server, Storage, and Database) discovery, design, build, and migration experience.

Experience with private and public cloud architectures, pros/cons, and migration considerations.

Architectural exposure to Windows, Linux, UNIX, VMware, Hyper-V, XenServer, Oracle, DB2, SQL Server, IIS Server, SAN, NAS, VCE/FlexPod, and other technologies.

Hands-on experience with VBScript, TCP/IP, XML, C++, JavaScript.

Technical/Team Leadership Experience.

Personnel Development Experience (hiring, resource planning, performance feedback, etc.).

Adapts existing methods and procedures to create possible alternative solutions to moderately complex problems.

Experience with Strategy Development, Partnership Management or Business Development experience in high technology.

Deep experience in the development and DevOps technology space.

Background in software engineering or product management.

Success managing and building strong working relationships with cross-functional teams internally and externally with executives at partner organizations.

Proven ability to plan and manage at both the strategic and operational level and to launch new products successfully in the marketplace. Demonstrated success at working with cross-functional teams and building strong relationships across departments.

Ability to solve problems quickly and resourcefully with excellent communication and presentation skills.

Employment Type: Permanent
Work Hours: Full Time

Pay: $150,000 to $200,000 USD
Pay Period: Annual
Other Pay Info: bonus

Apply To Job
          Cloud and DevOps Infrastructure Professional or Manager / Request Technology - Anthony Honquest / Philadelphia, PA   
Request Technology - Anthony Honquest/Philadelphia, PA

Cloud and DevOps Infrastructure Professional or Manager

*Managers must have experience leading teams on Client engagements*

*Must have a Bachelors Degree, and be open to travel Monday thru Thursday each week*

Prestigious Global Company is currently looking for Cloud and DevOps Infrastructure Professionals at the Manager and Consultant level. Individual will act as a Subject Matter Expert in area of expertise and support clients to design, build and run an agile, scalable, standardized and optimal IT infrastructure to deliver operational efficiencies, drive workplace productivity and meet dynamic business needs. Apply infrastructure improvements and contribute to assets/offerings and thought leadership. Enhances marketplace reputation. This role works with companies to make them more productive, mobile, and collaborative by using cloud computing to get their work done. You drive business relationships by analysing business performance, identifying methods to improve or scale Legacy cloud applications, and pitch ideas to identify opportunity to deliver results. You will be asked to drive business relationships with customers and establish technical integrations and set clear expectations. You will be key in managing these projects throughout the life from business planning to cloud migration, to customer adoption and client benefit. Your depth of experience in the cloud ISV ecosystem will give you first-hand insights into the vendors and industry trends that will enable your success in driving associated partnerships and deliver technical solutions for the benefit of our customers. You would have a passion for building technology-driven partnerships with strong business potential that can be measured by the customer adoption of the solutions you and the teams create, and seeking daily engagement, you'll provide seamless supporting extending the client preference. We are focused on cloud, Legacy application migration expertise, and a high performance delivered solution to delight our customers.

Responsibilities:

Eagerness to participate on a team designing, building and/or testing cloud infrastructure focused solutions.

Develop and deliver strategies and solutions for enterprise infrastructure - including Servers, databases, storage devices, etc. - that handle the data necessary for enterprise operations.

Thoroughly document all facets of the infrastructure solution.

Apply methodology, reusable assets, and previous work experience to deliver consistently high quality work.

Deliver written or oral status reports regularly.

Stay educated on new and emerging market offerings that may be of interest to our clients.

Depth of expertise in Cloud ISV ecosystems

Extensive travel may be required.

Adapt to existing methods and procedures to create possible alternative solutions to moderately complex problems.

Understand the strategic direction set by senior management as it relates to team goals.

Use considerable judgment to define solution and seeks guidance on complex problems.

Primary upward interaction is with direct supervisor. May interact with peers and/or management levels at a client and/or within. Establish methods and procedures on new assignments with guidance.

Manage small teams and/or work efforts (if in an independent contributor role) at a client or within.

Establish a deep understanding of customer needs for your area and leverage those insights to champion partner integrations with our technologies for maximum partner and customer success

Develop partner strategies, strategic account plans, and key executive relationships, including growth opportunities, action planning, and business growth forecasting.

Qualifications:

Minimum of a Bachelor's degree.

Minimum of 4 years of experience in all aspects of cloud computing (infrastructure, storage, platforms and data), as well as cloud market, competitive dynamics and customer buying behavior.

Minimum of 4 years of experience in partner management or business development.

Minimum of 4 years of technical architecture design, evaluation, and investigation.

Minimum of 4 years of Project Management experience (Project and Resource planning using MS Project).

Minimum of 4 years of professional experience in 5 of the following 8 areas:

Server Operating Systems (eg Microsoft Windows, Unix, Linux, etc.).

Virtualization Platforms (eg VMWare, Hyper-V, etc.).

Cloud Computing and Storage, Cloud-Based Technologies (AWS, Azure, GCP)

Workload Migration Automation Tools (Double-Take, Racemi, etc.)

Cloud Management Platforms (vRealize, Gravitant, etc.)

Infrastructure provisioning and management (Puppet, Chef, Ansible)

Data center infrastructure expertise (Equinix, Teradata)

Platform- and infrastructure-as-a-service markets

Preferred Skills:

Google Cloud Platform certification

Previous Consulting or client service delivery experience.

Infrastructure (Server, Storage, and Database) discovery, design, build, and migration experience.

Experience with private and public cloud architectures, pros/cons, and migration considerations.

Architectural exposure to Windows, Linux, UNIX, VMware, Hyper-V, XenServer, Oracle, DB2, SQL Server, IIS Server, SAN, NAS, VCE/FlexPod, and other technologies.

Hands-on experience with VBScript, TCP/IP, XML, C++, JavaScript.

Technical/Team Leadership Experience.

Personnel Development Experience (hiring, resource planning, performance feedback, etc.).

Adapts existing methods and procedures to create possible alternative solutions to moderately complex problems.

Experience with Strategy Development, Partnership Management or Business Development experience in high technology.

Deep experience in the development and DevOps technology space.

Background in software engineering or product management.

Success managing and building strong working relationships with cross-functional teams internally and externally with executives at partner organizations.

Proven ability to plan and manage at both the strategic and operational level and to launch new products successfully in the marketplace. Demonstrated success at working with cross-functional teams and building strong relationships across departments.

Ability to solve problems quickly and resourcefully with excellent communication and presentation skills.

Employment Type: Permanent
Work Hours: Full Time

Pay: $100,000 to $200,000 USD
Pay Period: Annual
Other Pay Info: Bonus

Apply To Job
          Cloud Infrastructure DevOps Specialist / Request Technology - Craig Johnson / Reston, VA   
Request Technology - Craig Johnson/Reston, VA

*Must have a Bachelor's Degree, and be open to travel Monday thru Thursday each week*

Prestigious Global Company is currently seeking Cloud Infrastructure DevOps Specialists at all levels. The candidate will act as a Subject Matter Expert in their area of expertise and support clients to design, build and run an agile, scalable, standardized and optimal IT infrastructure that delivers operational efficiencies, drives workplace productivity and meets dynamic business needs. You will apply infrastructure improvements, contribute to assets/offerings and thought leadership, and enhance the firm's marketplace reputation. This role works with companies to make them more productive, mobile, and collaborative by using cloud computing to get their work done. You drive business relationships by analysing business performance, identifying methods to improve or scale legacy cloud applications, and pitching ideas that identify opportunities to deliver results. You will be asked to drive business relationships with customers, establish technical integrations and set clear expectations. You will be key in managing these projects throughout their life cycle, from business planning to cloud migration, customer adoption and client benefit. Your depth of experience in the cloud ISV ecosystem will give you first-hand insight into the vendors and industry trends that will enable your success in driving associated partnerships and delivering technical solutions for the benefit of our customers. You should have a passion for building technology-driven partnerships with strong business potential, measured by customer adoption of the solutions you and the teams create, and through daily engagement you will provide seamless support that extends client preference. We are focused on cloud expertise, legacy application migration expertise, and high-performance delivered solutions that delight our customers.

Responsibilities:

Eagerness to participate on a team designing, building and/or testing cloud infrastructure focused solutions.

Develop and deliver strategies and solutions for enterprise infrastructure -- including Servers, databases, storage devices, etc. -- that handle the data necessary for enterprise operations.

Thoroughly document all facets of the infrastructure solution.

Apply methodology, reusable assets, and previous work experience to deliver consistently high-quality work.

Deliver written or oral status reports regularly.

Stay educated on new and emerging market offerings that may be of interest to our clients.

Depth of expertise in Cloud ISV ecosystems

Extensive travel may be required.

Adapt existing methods and procedures to create possible alternative solutions to moderately complex problems.

Understand the strategic direction set by senior management as it relates to team goals.

Use considerable judgment to define solutions and seek guidance on complex problems.

Primary upward interaction is with direct supervisor. May interact with peers and/or management levels at a client and/or within. Establish methods and procedures on new assignments with guidance.

Manage small teams and/or work efforts (if in an independent contributor role) at a client or within.

Establish a deep understanding of customer needs for your area and leverage those insights to champion partner integrations with our technologies for maximum partner and customer success

Develop partner strategies, strategic account plans, and key executive relationships, including growth opportunities, action planning, and business growth forecasting.

Qualifications:

Minimum of a Bachelor's degree.

Minimum of 4 years of experience in all aspects of cloud computing (infrastructure, storage, platforms and data), as well as cloud market, competitive dynamics and customer buying behavior.

Minimum of 4 years of experience in partner management or business development.

Minimum of 4 years of technical architecture design, evaluation, and investigation.

Minimum of 4 years of Project Management experience (Project and Resource planning using MS Project).

Minimum of 4 years of professional experience in 5 of the following 8 areas:

Server Operating Systems (eg Microsoft Windows, Unix, Linux, etc.).

Virtualization Platforms (eg VMWare, Hyper-V, etc.).

Cloud Computing and Storage, Cloud-Based Technologies (AWS, Azure, GCP)

Workload Migration Automation Tools (Double-Take, Racemi, etc.)

Cloud Management Platforms (vRealize, Gravitant, etc.)

Infrastructure provisioning and management (Puppet, Chef, Ansible)

Data center infrastructure expertise (Equinix, Teradata)

Platform- and infrastructure-as-a-service markets

Preferred Skills:

Google Cloud Platform certification

Previous Consulting or client service delivery experience.

Infrastructure (Server, Storage, and Database) discovery, design, build, and migration experience.

Experience with private and public cloud architectures, pros/cons, and migration considerations.

Architectural exposure to Windows, Linux, UNIX, VMware, Hyper-V, XenServer, Oracle, DB2, SQL Server, IIS Server, SAN, NAS, VCE/FlexPod, and other technologies.

Hands-on experience with VBScript, TCP/IP, XML, C++, JavaScript.

Technical/Team Leadership Experience.

Personnel Development Experience (hiring, resource planning, performance feedback, etc.).

Adapts existing methods and procedures to create possible alternative solutions to moderately complex problems.

Experience with Strategy Development, Partnership Management or Business Development experience in high technology.

Deep experience in the development and DevOps technology space.

Background in software engineering or product management.

Success managing and building strong working relationships with cross-functional teams internally and externally with executives at partner organizations.

Proven ability to plan and manage at both the strategic and operational level and to launch new products successfully in the marketplace. Demonstrated success at working with cross-functional teams and building strong relationships across departments.

Ability to solve problems quickly and resourcefully with excellent communication and presentation skills.

Employment Type: Permanent
Work Hours: Full Time
Other Pay Info: Open + Bonus

Apply To Job
          Cloud Infrastructure DevOps Specialist / Request Technology - Craig Johnson / New York, NY   
Request Technology - Craig Johnson/New York, NY

*Must have a Bachelor's Degree, and be open to travel Monday thru Thursday each week*

Prestigious Global Company is currently seeking Cloud Infrastructure DevOps Specialists at all levels. The candidate will act as a Subject Matter Expert in their area of expertise and support clients to design, build and run an agile, scalable, standardized and optimal IT infrastructure that delivers operational efficiencies, drives workplace productivity and meets dynamic business needs. You will apply infrastructure improvements, contribute to assets/offerings and thought leadership, and enhance the firm's marketplace reputation. This role works with companies to make them more productive, mobile, and collaborative by using cloud computing to get their work done. You drive business relationships by analysing business performance, identifying methods to improve or scale legacy cloud applications, and pitching ideas that identify opportunities to deliver results. You will be asked to drive business relationships with customers, establish technical integrations and set clear expectations. You will be key in managing these projects throughout their life cycle, from business planning to cloud migration, customer adoption and client benefit. Your depth of experience in the cloud ISV ecosystem will give you first-hand insight into the vendors and industry trends that will enable your success in driving associated partnerships and delivering technical solutions for the benefit of our customers. You should have a passion for building technology-driven partnerships with strong business potential, measured by customer adoption of the solutions you and the teams create, and through daily engagement you will provide seamless support that extends client preference. We are focused on cloud expertise, legacy application migration expertise, and high-performance delivered solutions that delight our customers.

Responsibilities:

Eagerness to participate on a team designing, building and/or testing cloud infrastructure focused solutions.

Develop and deliver strategies and solutions for enterprise infrastructure -- including Servers, databases, storage devices, etc. -- that handle the data necessary for enterprise operations.

Thoroughly document all facets of the infrastructure solution.

Apply methodology, reusable assets, and previous work experience to deliver consistently high-quality work.

Deliver written or oral status reports regularly.

Stay educated on new and emerging market offerings that may be of interest to our clients.

Depth of expertise in Cloud ISV ecosystems

Extensive travel may be required.

Adapt existing methods and procedures to create possible alternative solutions to moderately complex problems.

Understand the strategic direction set by senior management as it relates to team goals.

Use considerable judgment to define solutions and seek guidance on complex problems.

Primary upward interaction is with direct supervisor. May interact with peers and/or management levels at a client and/or within. Establish methods and procedures on new assignments with guidance.

Manage small teams and/or work efforts (if in an independent contributor role) at a client or within.

Establish a deep understanding of customer needs for your area and leverage those insights to champion partner integrations with our technologies for maximum partner and customer success

Develop partner strategies, strategic account plans, and key executive relationships, including growth opportunities, action planning, and business growth forecasting.

Qualifications:

Minimum of a Bachelor's degree.

Minimum of 4 years of experience in all aspects of cloud computing (infrastructure, storage, platforms and data), as well as cloud market, competitive dynamics and customer buying behavior.

Minimum of 4 years of experience in partner management or business development.

Minimum of 4 years of technical architecture design, evaluation, and investigation.

Minimum of 4 years of Project Management experience (Project and Resource planning using MS Project).

Minimum of 4 years of professional experience in 5 of the following 8 areas:

Server Operating Systems (eg Microsoft Windows, Unix, Linux, etc.).

Virtualization Platforms (eg VMWare, Hyper-V, etc.).

Cloud Computing and Storage, Cloud-Based Technologies (AWS, Azure, GCP)

Workload Migration Automation Tools (Double-Take, Racemi, etc.)

Cloud Management Platforms (vRealize, Gravitant, etc.)

Infrastructure provisioning and management (Puppet, Chef, Ansible)

Data center infrastructure expertise (Equinix, Teradata)

Platform- and infrastructure-as-a-service markets

Preferred Skills:

Google Cloud Platform certification

Previous Consulting or client service delivery experience.

Infrastructure (Server, Storage, and Database) discovery, design, build, and migration experience.

Experience with private and public cloud architectures, pros/cons, and migration considerations.

Architectural exposure to Windows, Linux, UNIX, VMware, Hyper-V, XenServer, Oracle, DB2, SQL Server, IIS Server, SAN, NAS, VCE/FlexPod, and other technologies.

Hands-on experience with VBScript, TCP/IP, XML, C++, JavaScript.

Technical/Team Leadership Experience.

Personnel Development Experience (hiring, resource planning, performance feedback, etc.).

Adapts existing methods and procedures to create possible alternative solutions to moderately complex problems.

Experience with Strategy Development, Partnership Management or Business Development experience in high technology.

Deep experience in the development and DevOps technology space.

Background in software engineering or product management.

Success managing and building strong working relationships with cross-functional teams internally and externally with executives at partner organizations.

Proven ability to plan and manage at both the strategic and operational level and to launch new products successfully in the marketplace. Demonstrated success at working with cross-functional teams and building strong relationships across departments.

Ability to solve problems quickly and resourcefully with excellent communication and presentation skills.

Employment Type: Permanent
Work Hours: Full Time
Other Pay Info: Open + Bonus

Apply To Job
          Cloud and DevOps Infrastructure Professional or Manager / Request Technology - Anthony Honquest / New York, NY   
Request Technology - Anthony Honquest/New York, NY

Cloud and DevOps Infrastructure Professional or Manager

*Managers must have experience leading teams on Client engagements*

*Must have a Bachelor's Degree, and be open to travel Monday thru Thursday each week*

Prestigious Global Company is currently looking for Cloud and DevOps Infrastructure Professionals at the Manager and Consultant level. The individual will act as a Subject Matter Expert in their area of expertise and support clients to design, build and run an agile, scalable, standardized and optimal IT infrastructure that delivers operational efficiencies, drives workplace productivity and meets dynamic business needs. You will apply infrastructure improvements, contribute to assets/offerings and thought leadership, and enhance the firm's marketplace reputation. This role works with companies to make them more productive, mobile, and collaborative by using cloud computing to get their work done. You drive business relationships by analysing business performance, identifying methods to improve or scale legacy cloud applications, and pitching ideas that identify opportunities to deliver results. You will be asked to drive business relationships with customers, establish technical integrations and set clear expectations. You will be key in managing these projects throughout their life cycle, from business planning to cloud migration, customer adoption and client benefit. Your depth of experience in the cloud ISV ecosystem will give you first-hand insight into the vendors and industry trends that will enable your success in driving associated partnerships and delivering technical solutions for the benefit of our customers. You should have a passion for building technology-driven partnerships with strong business potential, measured by customer adoption of the solutions you and the teams create, and through daily engagement you will provide seamless support that extends client preference. We are focused on cloud expertise, legacy application migration expertise, and high-performance delivered solutions that delight our customers.

Responsibilities:

Eagerness to participate on a team designing, building and/or testing cloud infrastructure focused solutions.

Develop and deliver strategies and solutions for enterprise infrastructure - including Servers, databases, storage devices, etc. - that handle the data necessary for enterprise operations.

Thoroughly document all facets of the infrastructure solution.

Apply methodology, reusable assets, and previous work experience to deliver consistently high-quality work.

Deliver written or oral status reports regularly.

Stay educated on new and emerging market offerings that may be of interest to our clients.

Depth of expertise in Cloud ISV ecosystems

Extensive travel may be required.

Adapt existing methods and procedures to create possible alternative solutions to moderately complex problems.

Understand the strategic direction set by senior management as it relates to team goals.

Use considerable judgment to define solutions and seek guidance on complex problems.

Primary upward interaction is with direct supervisor. May interact with peers and/or management levels at a client and/or within. Establish methods and procedures on new assignments with guidance.

Manage small teams and/or work efforts (if in an independent contributor role) at a client or within.

Establish a deep understanding of customer needs for your area and leverage those insights to champion partner integrations with our technologies for maximum partner and customer success

Develop partner strategies, strategic account plans, and key executive relationships, including growth opportunities, action planning, and business growth forecasting.

Qualifications:

Minimum of a Bachelor's degree.

Minimum of 4 years of experience in all aspects of cloud computing (infrastructure, storage, platforms and data), as well as cloud market, competitive dynamics and customer buying behavior.

Minimum of 4 years of experience in partner management or business development.

Minimum of 4 years of technical architecture design, evaluation, and investigation.

Minimum of 4 years of Project Management experience (Project and Resource planning using MS Project).

Minimum of 4 years of professional experience in 5 of the following 8 areas:

Server Operating Systems (eg Microsoft Windows, Unix, Linux, etc.).

Virtualization Platforms (eg VMWare, Hyper-V, etc.).

Cloud Computing and Storage, Cloud-Based Technologies (AWS, Azure, GCP)

Workload Migration Automation Tools (Double-Take, Racemi, etc.)

Cloud Management Platforms (vRealize, Gravitant, etc.)

Infrastructure provisioning and management (Puppet, Chef, Ansible)

Data center infrastructure expertise (Equinix, Teradata)

Platform- and infrastructure-as-a-service markets

Preferred Skills:

Google Cloud Platform certification

Previous Consulting or client service delivery experience.

Infrastructure (Server, Storage, and Database) discovery, design, build, and migration experience.

Experience with private and public cloud architectures, pros/cons, and migration considerations.

Architectural exposure to Windows, Linux, UNIX, VMware, Hyper-V, XenServer, Oracle, DB2, SQL Server, IIS Server, SAN, NAS, VCE/FlexPod, and other technologies.

Hands-on experience with VBScript, TCP/IP, XML, C++, JavaScript.

Technical/Team Leadership Experience.

Personnel Development Experience (hiring, resource planning, performance feedback, etc.).

Adapts existing methods and procedures to create possible alternative solutions to moderately complex problems.

Experience with Strategy Development, Partnership Management or Business Development experience in high technology.

Deep experience in the development and DevOps technology space.

Background in software engineering or product management.

Success managing and building strong working relationships with cross-functional teams internally and externally with executives at partner organizations.

Proven ability to plan and manage at both the strategic and operational level and to launch new products successfully in the marketplace. Demonstrated success at working with cross-functional teams and building strong relationships across departments.

Ability to solve problems quickly and resourcefully with excellent communication and presentation skills.

Employment Type: Permanent
Work Hours: Full Time

Pay: $100,000 to $200,000 USD
Pay Period: Annual
Other Pay Info: Bonus

Apply To Job
          Update On A Plane   
Just checking in with a small update. You've probably already guessed that we're not going to have a show this week, which is largely my fault as I didn't anticipate that my traveling would wipe me out so much. We should be back as usual next week, but I'll be at CES in Las Vegas this week so we may have a push there too. My trip back to NY was a great time, got some relaxation in and also managed to hit the most fantastic Chinese Food place on the planet, which I always make time for when I'm back east. I actually made sure to post a very favorable review of the place on my Yelp! page. Yelp! is an awesome little site that has some brutally honest reviews of restaurants, stores and other stuff. I've found that while I have no tolerance for generic social networking sites like MySpace or Facebook, more specific social sites have a lot to offer.

On to the trip, my wife and I flew the new Virgin America airline and it was a very interesting experience all around, both good and bad. Let’s start with the bad points, since I prefer ending on better notes when possible. I originally booked our flight on July 7, a full 5 months and 16 days before we were to fly out. I always book long flights way ahead to ensure I can get an emergency-row seat: I have a rather severe terror of flying, I get claustrophobic in small spaces after extended periods, and I’m about 6’2” with a need for legroom. Virgin America offers an option to pay $25 extra for “premium seating,” which the emergency row is considered to be, and I gladly paid the extra money for the comfort. I received an email confirmation, and all seemed well. In early November there was a message left on our answering machine informing us that the flight time had been moved up much earlier in the morning, which again was no problem. All this time I had been doing nothing but pimping the hell out of the airline, telling everyone how much I was looking forward to it, I think even on the show once or twice.

The day before our flight, I logged in to do online check-in, and noticed our “premium seating” had been revoked. No refund of the premium rate, no phone call, they had simply booted us into a standard row and I find out the day before we’re leaving. Clearly, I’m pissed off. I call the Customer Service line and in all fairness, I think I did a poor job of explaining the issue to the woman I spoke to. I think she read my anger as being just at the lack of refund, not the seating change and the lack of notification about the change. If they had properly notified me about the seating change, I might have been able to call earlier and possibly have gotten my seats back or at least been moved to another flight that day with seating available. I called back a second time, and this time explained my issue more clearly to a different person but he offered no help at all beyond an apology. After this I called my mother to let her know the time our flight was coming in, and of course mentioned the seating nonsense. She advised me to check to see if any of the front ‘bulkhead’ seating was open, as it was typically still more spacious than standard seating. I did that, and sure enough there were two seats open and I immediately called VA back and got a really helpful guy in Florida who got everything taken care of for me, which kept my head from exploding. He got my seats changed for the same cost as the refund, although after the failure to notify I felt I should have received the refund anyway, especially since we lost the emergency seats on the flight back which had not been changed at all. Regardless, at least we had potentially better seating. That was the end of the negative part of the experience, but it makes me wonder whether VA will be our first choice for an airline on our future trips. Incidentally, I have not seen that refund come through as of this writing and I will be contacting them on Monday if it hasn’t shown up by the end of the week.

Now the flight itself was great. Happily the bulkhead seating was actually better than emergency row seating, and the overall atmosphere of the plane was a very refreshing change. I have posted a Flickr photoset with pictures of the Linux based computer that each seat has, including the available features and some that aren’t quite in place yet. Overall the whole approach to flying that Virgin is trying to promote is fantastic. Three prong power outlets and USB ports in each seat, a flipside remote control that retracts into the seat arm and has a full QWERTY keyboard, on demand drink service and of course the 9” widescreen touch screen computer with a reasonably hefty free selection of movies, TV and reportedly 3,000 or so MP3s. There is also premium content, and it featured reasonably recent movies such as The Bourne Ultimatum and Resident Evil: Extinction. I tested out the chat feature, which would be really useful if you’re traveling with friends and aren’t seated together but otherwise is simply a cute thing to play with for 5 minutes. The drink ordering is a genius move when you think about it. Less of that wobbling cart gridlocking the aisle when you badly need to piss, the ability to get a drink when you’re actually thirsty instead of when they bring it to you. All around, the on demand model is a fantastic way to go for airlines. All in all, a very good actual flight. It’s really a shame that the customer service end was largely negative, because like most people I remember more of the bad than the good about the flight. In the end, will I fly Virgin again? Well, I’m going to send a letter to their customer service department and that answer will depend largely on the response.

And on that note, I bring this rambling missive to a close. We are planning to record something while I’m in Vegas, and depending on time I may have something for the 9th. Until then, take it easy.


          Computer Futures: C++ Software Engineer   
£40000 - £50000 per annum + competitive: Computer Futures: We're looking for a C++ Software Engineer to join a development organisation that provides solutions to digital industries. If you have the ability to develop using C++ for Linux and can work using Agile/XP practices, then you could be the Software Engineer Bristol
          Computer Futures: Software Engineer   
£30000 - £50000 per annum + competitive: Computer Futures: We're looking for a Software Engineer to join a development organisation that provides solutions to digital industries. If you have the ability to develop using C++ for Linux and can work using Agile/XP practices, then you could be the Software Engineer tha Bristol
          Playing old games on Linux #1: enjoying yourself beyond the technical hurdles   
I really like adventure video games, and point-and-click games even more. But playing a video game on Linux is often complicated, and in rare cases downright impossible! Linux is good, it's clean, it works (on my machine). Like Windows, but smoother, faster and more legal 😀 …
          Comment on Will Stock Market Panic Humble San Francisco Real Estate by t.Com   
Skype has opened its web-based client beta to the entire world, after launching it widely in the USA and U.K. earlier this month. Skype for Web also now supports Chromebook and Linux for instant-messaging conversations (no video and voice yet; those require a plug-in installation). The expansion of the beta adds support for a longer list of languages to help strengthen that international usability.
          Comment on Make Money Online Without a Website by Karry   
Skype has opened its web-based client beta to the entire world, after launching it mostly in the USA and U.K. earlier this month. Skype for Web also now supports Chromebook and Linux for instant-messaging conversations (no voice and video yet; those require a plug-in installation). The expansion of the beta adds support for a longer list of languages to help bolster that international usability.
          For once, LG may beat Samsung   
Samsung has been touting the latest strategy for Tizen — this time as an integrated OS for its smart TVs. It’s earned dozens of news stories this month, all tied to its promotional efforts for this week’s CES show in Las Vegas.

Samsung has always been better at announcing and publicizing Tizen strategies than it has been at executing on them. It did not skimp on the grandiloquent predictions when its original incarnation (then called Bada) was announced in November 2009:
Samsung Launches Open Mobile Platform: Samsung bada – The Next Wave Of The Mobile Industry
November 10, 2009
Samsung Electronics Co. Ltd., a leading mobile phone provider, today announced the launch of its own open mobile platform, Samsung bada [bada] in December. This new addition to Samsung's mobile ecosystem enables developers to create applications for millions of new Samsung mobile phones, and consumers to enjoy a fun and diverse mobile experience.

In order to build a rich smartphone experience accessible to a wider range of consumers across the world, Samsung brings bada, a new platform with a variety of mobile applications and content.

Based on Samsung's experience in developing previous proprietary platforms on Samsung mobile phones, Samsung can create the new platform and provide opportunities for developers. Samsung bada is also simple for developers to use, meaning it's one of the most developer-friendly environments available, particularly in the area of applications using Web services. Lastly, bada's ground-breaking User Interface (UI) can be transferred into a sophisticated and attractive UI design for developers.

Samsung will be able to expand the range of choices for mobile phone users to enjoy the smartphone experiences. By adopting Samsung bada, users will be able to easily enjoy various applications on their mobile.
Encouraged by Samsung, one analyst predicted that Tizen would make up “half of its portfolio by 2012.”

Instead (according to GSM Arena), only 11 bada models ever shipped, out of more than 3,200 models during the past 5 years, before bada was discontinued in favor of Tizen, a merger of bada and the Intel- and Nokia-flavored mobile Linuxes (among others).

Samsung announced its first Tizen phone — the Samsung Z — in June 2014. A defeatured version of the Galaxy S5, it debuted not in Korea — or North America or Europe — but in Russia, suggesting the company did not think it could compete head to head with the latest Android and iOS phones. In fact, it wasn't even ready for a third-world BRIC country: the release was cancelled due to a lack of applications.

At CES this week, Samsung announced that Tizen would jump species — from its viral reservoir in rare smartphones and smartwatches — and become the only OS it uses for its smart TVs. I had three reactions.

First, so what? Yes, as the leading TV vendor Samsung can push out lots of copies of Tizen. But does anyone care what OS is in their VCR, DVR, Blu-ray, TV or home stereo? (I care about the OS in my car stereo — due to cellphone compatibility — but that’s a story for another time.)

Second, Samsung is saying: “let’s ship a platform in a product category where no one cares about app availability.” In other words, it may never win developer support for Tizen — and thus a large assortment of apps — but on TVs, who cares?

Finally, while Tizen frees Samsung from dependence on the evil Google, is shipping Tizen an asset for Samsung — or a liability?

Under the hood, Tizen has a very robust Linux, reflecting bada’s 2011 merger with MeeGo, which in turn built upon years of work by Nokia (with Maemo) and Intel (with its Maemo fork called Moblin). (It also included the failed Linux Mobile standard, LiMo).

However, a robust OS under the hood means nothing if it has a clunky UI. Exhibit A is the Symbian OS with Nokia’s aged S60 UI; Exhibits B-Z are every incarnation of desktop Linux known to mankind.

Which brings me to the dark horse: LG. I hadn’t noticed, but two years ago LG bought webOS, the failed Palm smartphone OS that HP owned for three years before dumping it. This week LG announced it’s using webOS for its own TVs.

Almost six years ago, webOS was a really good smartphone OS. But despite Palm’s efforts to double down on its modern OS, it wasn’t enough to save the company. Now, webOS has a $100+ billion/year company behind it and, unlike at Palm or HP, a large volume of shipping products where it can run.

With a product strategy that usually consists of copying Samsung — much like Panasonic copied all its Japanese rivals — LG is rarely thought of as an innovative company. But here, instead of copying Samsung by developing its own lousy embedded OS, it bought a good one.

Again, will it matter? Will the TV OS matter more than screen size, brightness or — most importantly for a commodity product — price? As a former software guy, I want software to matter in providing differentiation. But I’m not going to bet even one dollar of our youngest’s college fund on it.
          KDE and Plasma: Release of Plasma 5.10.3, New ISO for Slackware Live PLASMA5, and KDE for FreeBSD   
  • Plasma 5.10.3

    Tuesday, 27 June 2017. Today KDE releases a Bugfix update to KDE Plasma 5, versioned 5.10.3. Plasma 5.10 was released in May with many feature refinements and new modules to complete the desktop experience.

  • KDE Plasma 5.10.3 Fixes Longstanding NVIDIA VT/Suspend Issue

    KDE Plasma 5.10.3 has been released as the newest bug-fix update to Plasma 5. For NVIDIA Linux users in particular this upgrade should be worthwhile.

  • New ISO for Slackware Live PLASMA5, with Stack Clash proof kernel and Plasma 5.10.2
  • GSoC - Week 3- 4

    Today, I will talk about my work on Krita during the week 3-4 of the coding period.

  • Let there be color!

    I've contributed to KDevelop in the past, on the Python plugin, and so far working on the Rust plugin, my impressions from back then were pretty much spot-on. KDevelop has one of the most well thought-out codebases I've seen. Specifically, KDevPlatform abstracts over different programming languages incredibly well and makes writing a new language plugin a very pleasant experience.

  • Daemons and friendly Ninjas

    There’s quite a lot of software that uses CMake as a (meta-)buildsystem. A quick count in the FreeBSD ports tree shows me 1110 ports (over a thousand) that use it. CMake generates buildsystem files which then direct the actual build — it doesn’t do building itself.

    There are multiple buildsystem-backends available: in regular usage, CMake generates Makefiles (and does a reasonable job of producing Makefiles that work for GNU Make and for BSD Make). But it can generate Ninja, or Visual Studio, and other buildsystem files. It’s quite flexible in this regard.

    Recently, the KDE-FreeBSD team has been working on Qt WebEngine, which is horrible. It contains a complete Chromium and who knows what else. Rebuilding it takes forever.


          Printfil 5.17   
Printfil allows printing from DOS, Unix, Linux and host programs to any Windows printer, including USB, GDI, network printers, fax printers and PDF writers, without changes to the original applications.
          We've got a new blog entry: New Linux Samba vulnerability and fix http://ift.tt/2s0fl61    

We've got a new blog entry: New Linux Samba vulnerability and fix http://ift.tt/2s0fl61 


          Sr Software Engineer ( Big Data, NoSQL, distributed systems ) - Stride Search - Los Altos, CA   
Experience with text search platforms, machine learning platforms. Mastery over Linux system internals, ability to troubleshoot performance problems using tools...
From Stride Search - Tue, 04 Apr 2017 06:25:16 GMT - View all Los Altos, CA jobs
          Linux Kernel NFSv4 Server /fs/nfsd/nfs4proc.c nfsd4_layout_verify UDP Packet denial of service   

A vulnerability, which was classified as problematic, has been found in Linux Kernel (the affected version is unknown). This issue affects the function nfsd4_layout_verify of the file /fs/nfsd/nfs4proc.c of the component NFSv4 Server. The manipulation as part of a UDP Packet leads to a denial of service vulnerability (crash). Using CWE to declare the problem leads to CWE-404. Impacted is availability.

The weakness was shared 05/05/2017 by Jani Tuovila as confirmed git commit (GIT Repository). The advisory is shared for download at git.kernel.org. The identification of this vulnerability is CVE-2017-8797. The attack may be initiated remotely. A single authentication is needed for exploitation. Technical details of the vulnerability are known, but there is no available exploit. The pricing for an exploit might be around USD $0-$5k at the moment (estimation calculated on 06/28/2017). The following code is the reason for this vulnerability:

if (!(exp->ex_layout_types & (1 << layout_type))) {

Applying a patch is able to eliminate this problem. The bugfix is ready for download at git.kernel.org. A possible mitigation has been published immediately after the disclosure of the vulnerability. The vulnerability will be addressed with the following lines of code:

if (layout_type >= LAYOUT_TYPE_MAX ||
   !(exp->ex_layout_types & (1 << layout_type))) {
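
For readers who want to see why the added range check matters, here is a minimal, self-contained C sketch (illustrative only, not the kernel code; LAYOUT_TYPE_MAX and the mask values are invented for the example). Shifting by a value greater than or equal to the width of the left operand is undefined behaviour in C, so an attacker-supplied layout_type has to be validated before it is used as a shift count:

#include <stdio.h>
#include <stdint.h>

#define LAYOUT_TYPE_MAX 5                 /* hypothetical upper bound */

/* Reject out-of-range values before using them as a shift count,
 * mirroring the pattern of the fix quoted above. */
static int layout_supported(uint32_t supported_mask, uint32_t layout_type)
{
    if (layout_type >= LAYOUT_TYPE_MAX)
        return 0;
    return (supported_mask & (1u << layout_type)) != 0;
}

int main(void)
{
    uint32_t mask = (1u << 1) | (1u << 4);      /* types 1 and 4 offered */

    printf("%d\n", layout_supported(mask, 4));  /* 1: supported */
    printf("%d\n", layout_supported(mask, 2));  /* 0: not offered */
    printf("%d\n", layout_supported(mask, 99)); /* 0: rejected, no undefined shift */
    return 0;
}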

The vulnerability is also documented in the vulnerability database at SecurityTracker (ID 1038790).

CVSSv3

VulDB Base Score: 4.3
VulDB Temp Score: 4.1
VulDB Vector: CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:L/E:X/RL:O/RC:C
VulDB Reliability: High

CVSSv2

VulDB Base Score: 3.5 (CVSS2#AV:N/AC:M/Au:S/C:N/I:N/A:P)
VulDB Temp Score: 3.0 (CVSS2#E:ND/RL:OF/RC:C)
VulDB Reliability: High

CPE

Exploiting

Class: Denial of service / Crash (CWE-404)
Local: No
Remote: Yes

Availability: No

Price Prediction: steady
Current Price Estimation: $0-$5k (0-day) / $0-$5k (Today)

Countermeasures

Recommended: Patch
Status: Official fix
Reaction Time: 0 days since reported
0-Day Time: 0 days since found
Exposure Time: 0 days since known

Patch: git.kernel.org

Timeline

05/05/2017 Advisory disclosed
05/05/2017 Countermeasure disclosed
06/27/2017 SecurityTracker entry created
06/28/2017 VulDB entry created
06/28/2017 VulDB last update

Sources

Advisory: git.kernel.org
Researcher: Jani Tuovila
Status: Confirmed

CVE: CVE-2017-8797 (mitre.org) (nvd.nist.org) (cvedetails.com)

SecurityTracker: 1038790 - Linux Kernel NFSv4 Server Input Validation Flaw in pNFS LAYOUTGET Command Lets Remote Users Cause the Target Service to Crash

Entry

Created: 06/28/2017
Entry: 77.6% complete

          Linux Kernel up to 4.11.7 Message Queue msnd_pinnacle.c snd_msnd_interrupt buffer overflow   

A vulnerability, which was classified as critical, was found in Linux Kernel up to 4.11.7. Affected is the function snd_msnd_interrupt of the file sound/isa/msnd/msnd_pinnacle.c of the component Message Queue. The manipulation with an unknown input leads to a buffer overflow vulnerability. CWE is classifying the issue as CWE-119. This is going to have an impact on confidentiality, integrity, and availability.

The weakness was disclosed 06/28/2017. This vulnerability is traded as CVE-2017-9984 since 06/27/2017. Local access is required to approach this attack. A single authentication is required for exploitation. Technical details are known, but there is no available exploit. The structure of the vulnerability defines a possible price range of USD $5k-$25k at the moment (estimation calculated on 06/28/2017).

There is no information about possible countermeasures known. It may be suggested to replace the affected object with an alternative product.

The entries 102883 and 102884 are pretty similar.
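
The advisory gives no further technical detail, but CWE-119 in a driver's message-queue path typically means an index or length coming from outside is used without a range check before writing into a fixed-size buffer. The following generic C sketch is an assumption for illustration only (msnd_queue, MSND_QUEUE_SIZE and msnd_queue_push are invented names, not the driver's code); it shows the class of bug and the kind of bounds check that prevents it:

#include <stdint.h>

#define MSND_QUEUE_SIZE 16                /* hypothetical fixed queue length */

struct msnd_queue {
    uint16_t tail;                        /* index advanced by the device side */
    uint16_t data[MSND_QUEUE_SIZE];
};

/* Without the range check, a corrupted or hostile tail value indexes past
 * the end of data[] -- the CWE-119 class named in the advisory. */
static int msnd_queue_push(struct msnd_queue *q, uint16_t value)
{
    if (q->tail >= MSND_QUEUE_SIZE)
        return -1;                        /* would overflow the fixed buffer */
    q->data[q->tail++] = value;
    return 0;
}

int main(void)
{
    struct msnd_queue q = { .tail = 1000 };   /* simulate a bogus index */
    return msnd_queue_push(&q, 0x1234) == -1 ? 0 : 1;
}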

CVSSv3

VulDB Base Score: 5.3
VulDB Temp Score: 5.3
VulDB Vector: CVSS:3.0/AV:L/AC:L/PR:L/UI:N/S:U/C:L/I:L/A:L/E:X/RL:X/RC:X
VulDB Reliability: High

CVSSv2

VulDB Base Score: 4.1 (CVSS2#AV:L/AC:M/Au:S/C:P/I:P/A:P)
VulDB Temp Score: 4.1 (CVSS2#E:ND/RL:ND/RC:ND)
VulDB Reliability: High

CPE

Exploiting

Class: Buffer overflow (CWE-119)
Local: Yes
Remote: No

Availability: No

Price Prediction: steady
Current Price Estimation: $0-$5k (0-day) / $0-$5k (Today)

Countermeasures

Recommended: no mitigation known
0-Day Time: 0 days since found

Timeline

06/27/2017 CVE assigned
06/28/2017 Advisory disclosed
06/28/2017 VulDB entry created
06/28/2017 VulDB last update

Sources


CVE: CVE-2017-9984 (mitre.org) (nvd.nist.org) (cvedetails.com)
See also: 102883, 102884

Entry

Created: 06/28/2017
Entry: 72% complete

          Linux Kernel up to 4.11.7 Message Queue msnd_pinnacle.c intr buffer overflow   

A vulnerability was found in Linux Kernel up to 4.11.7 and classified as critical. Affected by this issue is the function intr of the file sound/oss/msnd_pinnacle.c of the component Message Queue. The manipulation with an unknown input leads to a buffer overflow vulnerability. Using CWE to declare the problem leads to CWE-119. Impacted is confidentiality, integrity, and availability.

The weakness was shared 06/28/2017. This vulnerability is handled as CVE-2017-9986 since 06/27/2017. The attack needs to be approached locally. A single authentication is needed for exploitation. There are known technical details, but no exploit is available. The current price for an exploit might be approx. USD $5k-$25k (estimation calculated on 06/28/2017).

There is no information about possible countermeasures known. It may be suggested to replace the affected object with an alternative product.

Similar entries are available at 102882 and 102883.

CVSSv3

VulDB Base Score: 5.3
VulDB Temp Score: 5.3
VulDB Vector: CVSS:3.0/AV:L/AC:L/PR:L/UI:N/S:U/C:L/I:L/A:L/E:X/RL:X/RC:X
VulDB Reliability: High

CVSSv2

VulDB Base Score: 4.1 (CVSS2#AV:L/AC:M/Au:S/C:P/I:P/A:P)
VulDB Temp Score: 4.1 (CVSS2#E:ND/RL:ND/RC:ND)
VulDB Reliability: High

CPE

Exploiting

Class: Buffer overflow (CWE-119)
Local: Yes
Remote: No

Availability: No

Price Prediction: steady
Current Price Estimation: $0-$5k (0-day) / $0-$5k (Today)

Countermeasures

Recommended: no mitigation known
0-Day Time: 0 days since found

Timeline

06/27/2017 CVE assigned
06/28/2017 Advisory disclosed
06/28/2017 VulDB entry created
06/28/2017 VulDB last update

Sources


CVE: CVE-2017-9986 (mitre.org) (nvd.nist.org) (cvedetails.com)
See also: 102882, 102883

Entry

Created: 06/28/2017
Entry: 72% complete

          Linux Kernel up to 4.11.7 Message Queue msnd_midi.c snd_msndmidi_input_read buffer overflow   

A vulnerability has been found in Linux Kernel up to 4.11.7 and classified as critical. Affected by this vulnerability is the function snd_msndmidi_input_read of the file sound/isa/msnd/msnd_midi.c of the component Message Queue. The manipulation with an unknown input leads to a buffer overflow vulnerability. The CWE definition for the vulnerability is CWE-119. As an impact it is known to affect confidentiality, integrity, and availability.

The weakness was presented 06/28/2017. This vulnerability is known as CVE-2017-9985 since 06/27/2017. Attacking locally is a requirement. A single authentication is necessary for exploitation. Technical details of the vulnerability are known, but there is no available exploit. The pricing for an exploit might be around USD $5k-$25k at the moment (estimation calculated on 06/28/2017).

There is no information about possible countermeasures known. It may be suggested to replace the affected object with an alternative product.

See 102882 and 102884 for similar entries.

CVSSv3

VulDB Base Score: 5.3
VulDB Temp Score: 5.3
VulDB Vector: CVSS:3.0/AV:L/AC:L/PR:L/UI:N/S:U/C:L/I:L/A:L/E:X/RL:X/RC:X
VulDB Reliability: High

CVSSv2

VulDB Base Score: 4.1 (CVSS2#AV:L/AC:M/Au:S/C:P/I:P/A:P)
VulDB Temp Score: 4.1 (CVSS2#E:ND/RL:ND/RC:ND)
VulDB Reliability: High

CPE

Exploiting

Class: Buffer overflow (CWE-119)
Local: Yes
Remote: No

Availability: No

Price Prediction: steady
Current Price Estimation: $0-$5k (0-day) / $0-$5k (Today)

Countermeasures

Recommended: no mitigation known
0-Day Time: 0 days since found

Timeline

06/27/2017 CVE assigned
06/28/2017 Advisory disclosed
06/28/2017 VulDB entry created
06/28/2017 VulDB last update

Sources


CVE: CVE-2017-9985 (mitre.org) (nvd.nist.org) (cvedetails.com)
See also: 102882, 102884

Entry

Created: 06/28/2017
Entry: 72% complete

          Pornhub's surprisingly interesting traffic statistics   

It's not much of a stretch to say that adult content and the advance of cutting-edge technology have always gone hand in hand(?). The classic example is the videotape format war.

Some argue that the decision in the US to release porn films only on VHS had a major influence on the videotape format war.

Even today the industry keeps trying to merge with cutting-edge technology in many areas, from robots to augmented reality(?).

Pornhub, the subject of this post, is a service that emerged as streaming video technology matured. Call it the YouTube of adult streaming, perhaps? There are several similar services, but Pornhub is said to be among the biggest.


A few days ago Pornhub published per-OS traffic statistics on its blog, and browsing through them I found some interesting bits. Here's a summary.

Desktop OS

Not hugely different from general usage figures… but Windows, which is above 90% on NetMarketShare (as of July 2014), drops to 85% here, while the Mac jumps from around 6% to 10%.

Desktop Windows versions

Windows 7 is the most common. Second place is not 8 but XP. Interestingly, NT and 2000 together account for about 0.5%. On top of that, Pornhub says thousands of people on Windows 95 and 98 search the site every month, so those still have users.

Desktop OS by country

Windows by country
Mac by country
Linux by country

For both Windows and Mac, the US comes first. Pornhub is a US site, of course, but the fact that we all set our proxies to 'US' probably contributes too. The odd one out is India on Linux: second after the US, and not by a large margin. Truly a rising IT powerhouse(?)

Mobile OS

Mobile isn't much different either. Android appears to lead, but iOS isn't far behind. The sorry state of BlackBerry and Windows Phone is plain to see. Explaining BlackBerry's low share away as 'it's a work phone' would be a bit of a stretch, wouldn't it?;; And, surprisingly, there is a Samsung entry, presumably Tizen or Bada; it's a shame there is no further detail.

Tablet OS

Tablets also track the general usage stats (StatCounter, July 2014) fairly closely, though iOS is a bit higher and Android a bit lower. (The iPad really is that good…?) BlackBerry, which supposedly pulled out of tablets, shows up surprisingly often, followed by Windows tablets (probably the Surface). A sign of growth, I suppose? Haha.

Game consoles

Game consoles can get online these days, apparently.. I haven't touched a console since the PS2, so I can't really judge, but the PlayStation 3 and the Xbox make up most of the share. According to Kotaku, the PlayStation 3 figure appears to combine the PS3 and PS4, and the Xbox figure the Xbox One and the 360. Surprisingly, people also connect from the PlayStation Vita and the Nintendo 3DS. Hats off to Pornhub for supporting even those devices.

Time spent

duration on desktop OS
duration on mobile OS
duration on tablet OS
duration on game consoles

They're all pretty similar. You can guess why, right? (…)

P.S.) The original post is OS Battle - Porn by the Platform.
P.S. 2) Although clicking through will just take you to warning.or.kr.
P.S. 3) So let's give warning.or.kr a slick redesign(?)
P.S. 4) I can't quite bring myself to post the per-OS search term stats (...)


          By: JulesLt   
A little delayed, as for some reason I'd not correctly set up the RSS feed for the blog. Firstly, I'd like to say hats off to Mr. Fry for avoiding any mention of the Mac in his article, because it would have been both a distraction from the piece, and perhaps also detracted from the message ('ooh, look, it's one of those Mac users banging on about how secure the Mac is again'). Also because there have been Mac and Linux systems that have been found in botnets. How, you may ask? Well, while your basic operating system may be secure, there are a number of programs you can install that can make that irrelevant - for instance, if you're running a web server or database exposed to the Internet, you have a piece of software that's (a) sitting there listening to requests from the Internet and (b) capable of running programs (not native Windows or OS X programs, but programs nonetheless. Some d/b and web servers can, for instance, send email). Now luckily, this is software that domestic users are currently unlikely to run. It's also unfair to blame the operating system in this case, but it does show that using OS X or Linux does not proof you against such things entirely.
          System call numbers, arguments, and system call wrappers as seen in the Linux kernel   

A series that explores the internals of the printf() and main() functions used in the C-language "Hello World!" program from several angles: debugger analysis, disassembly, and reading the source code. Starting with the previous installment, we have been examining the write() and int $0x80 calls inside printf() from the Linux kernel source side. This time we learn more about system calls.
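
As a concrete illustration of the topic (my own minimal sketch, not code from the article): the libc wrapper for write() ultimately hands the kernel a system call number plus its arguments. The same thing can be expressed directly with the generic syscall(2) wrapper on Linux:

#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    const char msg[] = "Hello World!\n";

    /* Equivalent to write(1, msg, sizeof msg - 1): SYS_write is the system
     * call number, and the three arguments are what the kernel receives. */
    syscall(SYS_write, 1, msg, sizeof msg - 1);
    return 0;
}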




          Automated Driving Engineer – Motion Planning - Delphi - Pittsburgh, PA   
Familiar with Linux development environment and utilizing Linux libraries. At Delphi, we are dreamers who do....
From Delphi - Sun, 18 Jun 2017 10:34:44 GMT - View all Pittsburgh, PA jobs
          Lead Engineer – Autonomous Vehicle Motion Planning - Delphi - Pittsburgh, PA   
Familiar with Linux development environment and utilizing Linux libraries. At Delphi, we are dreamers who do....
From Delphi - Wed, 14 Jun 2017 10:24:30 GMT - View all Pittsburgh, PA jobs
          Automated Driving R&D Engineer - Vehicle Control - Delphi - Pittsburgh, PA   
Familiar with Linux development environment and utilizing Linux libraries. At Delphi, we are dreamers who do....
From Delphi - Sun, 28 May 2017 10:27:04 GMT - View all Pittsburgh, PA jobs
          Unmet dependencies   
Hi, I'm relatively new to Ubuntu admin. It is one of the downsides of Ubuntu that you need to be a techie to do almost anything outside the GUI, such as installing printers. Until that is fixed, Ubuntu, and Linux in general, isn't going to make it mainstream. Can't understand why doing a basic function like...
          [ubuntu] Ransomware   
Hey guys, with Ubuntu or any other Linux distro, does this ransomware affect Linux systems, and what should be done to prevent it? Sorry for the dumb question, just trying to cover my tracks even with Linux.
          Universal memory card reader, 1 Ft, no reserve price!!!!!!!! - Current price: 1 Ft   
Universal USB 2.0 memory card reader
480 Mb/s speed
Compatible with USB 1.1 and 2.0
4 card slots
Supported card types: Micro MS / M2 / SD / MMC / SDHC / DV / MS DUO / MS PRO DUO / Micro SD / T-Flash
Compatible operating systems: Windows 7/Vista/XP/2000/ME/98SE/98, Mac OS X 9.0 and Linux 2.4 or later
Slim design, compact size
Plug & Play: works immediately after insertion
Supported card capacity: up to 32 GB
Material: plastic
Size: 66 x 21 x 16 mm
Weight: 14 g
Colour: random
The products ship from abroad, so delivery takes 15-30 working days; please bid with this in mind!
Universal memory card reader, 1 Ft, no reserve price!!!!!!!!
Current price: 1 Ft
Auction ends: 2017-06-29 01:20
          gpivtrig   

This package contains a kernel module to trigger a CCD camera and a (double-cavity Nd:YAG) laser for so-called (Digital) Particle Image Velocimetry (PIV) and other image-analysis techniques for fluid flows.

The software sends the TTL trigger pulses to the camera, connected to the first pin of the parallel port, and to the lasers, connected to the second and third pins, with a prescribed delay. As the application runs under RealTimeLinux and RTAI, the timings of the trigger pulses are well defined (i.e. they are largely insensitive to the system's CPU load).
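
To make the idea concrete, here is a rough user-space C sketch of the same trigger sequence (an illustration only: the real package does this from an RTAI/RealTimeLinux kernel module to get deterministic timing, and the sketch assumes the legacy parallel port at I/O address 0x378, an x86 Linux system and root privileges for ioperm):

#include <stdio.h>
#include <unistd.h>
#include <sys/io.h>

#define LPT_DATA 0x378            /* parallel port data register (pins 2-9) */

int main(void)
{
    if (ioperm(LPT_DATA, 1, 1) != 0) {   /* request access to the port */
        perror("ioperm");
        return 1;
    }

    outb(0x01, LPT_DATA);   /* first data pin high: trigger the camera     */
    usleep(100);            /* prescribed delay (100 microseconds, say)    */
    outb(0x06, LPT_DATA);   /* second and third pins high: fire the lasers */
    usleep(100);
    outb(0x00, LPT_DATA);   /* all trigger lines low again                 */
    return 0;
}

In user space these delays are only approximate, which is exactly why the package generates the pulses inside a real-time kernel module.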

The build system is very preliminary, as I don't have a good idea how to do it properly. Any suggestions are welcome.


          The Linux Link Tech Show Episode 713   
Some of this and some of that.
          Podcasts: download them on Windows, Mac and Linux with gPodder   

Podcasts on the internet are multiplying, and there are podcasts on every topic and in every category. They are series of digital audio or video content released in episodes, which can be…
          aTunes, the cross-platform, open source media player that will amaze you   

Are you looking for a program that beats the performance of iTunes, Winamp or Amarok? No matter which operating system you have, take a look at aTunes and you'll be convinced. This media player uses as…
          System Administrator - Linux - State of Colorado Job Opportunities - Golden, CO   
Coordinates and interacts with other CCIT staff, particularly front-line user support staff (e.g. CCIT provides support for enterprise, departmental and...
From State of Colorado Job Opportunities - Mon, 26 Jun 2017 19:34:27 GMT - View all Golden, CO jobs
          PuppyRus 2.0 Snow Dog Modern [i386] (1xCD)   
PuppyRus 2.0 Snow Dog Modern [i386] (1xCD)


PuppyRus Linux is a unique operating system of its kind, based on Linux. The system is compact, fast and simple. It is a Live CD distribution developed and maintained by a team of enthusiasts who are constantly working on improving, fixing, Russian-localizing and extending the functionality of PuppyRus Linux OS.
          Comment on The Architecture of a Crawler by dcat   
I'm pretty busy for the next few weeks or months. I can be contacted via any of the methods listed on my contact page (http://www.trillinux.org/contact.html).
          james: RT @schestowitz Newsflash: PROPRIETARY software for #security is itself a security menace https://social.jamestechnotes.com/url/20659   
RT @schestowitz Newsflash: PROPRIETARY software for #security is itself a security menace https://social.jamestechnotes.com/url/20659
          The week on ScienceMag.cz: Topological quantum computers - ABCLinuxu.cz   


ABCLinuxu.cz
It thereby acquires a 2.5 percent stake in a young company that grew out of research at CEITEC VUT in Brno. The startup can easily link the images from two types of microscope, making work with samples both more precise and faster. Seeing the depth as well as the relief of miniature samples is ...



          Advertisement: BE LINUX   

Today, a short post from Stéphane, one of my classmates. I've invited him to contribute to Narcissique Blog from time to time, so you'll be seeing a few posts from him soon. Happy reading.

Pierre

---

We had already seen Microsoft ads, the instantly recognizable Apple ads, and even, recently, some eBay ads!
But here's something new: ads for open source software!

This ad was made for LINUX!
I'll let you see for yourself:

Personally, the Tux penguins won me over a long time ago! ;-)

More concretely, I think this ad is very well done and the style is great!
What about you?

Stéphane


          Porting a Linux 3.x kernel: stuck at "Calibrating delay loop..."   
I'm trying to port kernel 3.16.44 to an i.MX287 development board, using the in-tree mxs_defconfig and imx28-evk.dts, with the dts serial-port mapping modified and kernel low-level debug enabled. After building, I download the kernel to the board with MfgTool; the output is as below and stops at "Calibrating delay loop...". Is the problem in my dts or in the kernel configuration? Friends who have debugged this ...
          A newbie question about compiling the kernel for the 287A   
Admins & fellow forum members: I've just received a 287A V1.13 development board. I downloaded the V1.05 materials from the website, along with the official virtual machine. Compiling the kernel according to the manual, I was told the compiler was missing; I then found that the VM's cross-compiler is arm-none-linux-gnueabi-, so I modified the Makefile of the kernel provided on the official site, ...
          Privacy-focused Linux distribution Tails 3.0 released, now based on Debian 9   

Tails, the Linux distribution dedicated to privacy, has released version 3.0. The big change is that it is now based on Debian 9, which brings newer software with it, such as KeePassX 2.0.3 and LibreOffice 5.2.6.

One of Tails 3.0's capabilities is wiping all data in memory when the machine boots or shuts down; if the boot drive is pulled out of the machine, the memory-wipe function kicks in immediately as well.

This version drops support for 32-bit CPUs entirely and makes use of newer 64-bit CPU features such as NX and PIE.

Source - Tails



          Debian 9.0 "Stretch" released   

Debian has released version 9.0, updating its packages to a new generation of software. The kernel is Linux 4.9, the latest long-term release, which came out in late 2016. There are plenty of smaller software changes as well, for example:

  • MariaDB replaces MySQL as the default
  • A switch back to Firefox/Thunderbird
  • 94% of the source code can now be built reproducibly
  • The X display no longer needs to run as root
  • Version bumps for major software, e.g. Apache 2.4.25, Chromium 59.0, PHP 7.0, Ruby 2.3, Python 3.5.3, Golang 1.7

The mips64el CPU architecture is newly supported, while PowerPC has been dropped. The support period is 5 years from the release date.

Source - Debian

Topics: 

          Ubuntu to switch its display manager to GNOME's GDM, replacing LightDM   

Ubuntu is preparing to switch from LightDM to GDM (GNOME Display Manager) as the default display manager in Ubuntu 17.10 and 18.04 LTS, alongside the move from Unity to GNOME.

The team initially tried to make GNOME Shell work as a LightDM greeter, but splitting the code out proved difficult; after weighing the risk against the amount of work needed to make GNOME support LightDM, the team decided to use GDM instead.

One important job of a display manager is rendering the login screen, so if LightDM is replaced by GDM we should see Ubuntu's login and lock screens change as well.

Going forward, LightDM will receive only bug fixes from the Ubuntu Desktop team, and only for Ubuntu releases still within their support window.

Source - OMG! Ubuntu!


          Skype for Linux will stop working on July 1, 2017; users must upgrade to Skype Beta   

Skype has announced that its old Linux client (Skype for Linux 4.3) will stop working on July 1, 2017; Linux users will need to upgrade to the new client.

The old client was written in Qt and ran natively; it was last updated in 2014. The new client, versioned 5.x, is written with Electron and is still labelled Skype Beta (the latest version is 5.2). Skype Beta may still lack several important features available on other platforms.

Skype Beta can be downloaded from the Skype website as either a .deb or an .rpm package.

Source - OMG Ubuntu

          Android to move to newer Linux kernels, with a 4.4 kernel in the works   

One long-standing problem with Android is its use of very old Linux kernels (Android is on 3.18, released in 2014, while the latest kernel is 4.11).

Dave Burke, head of Android engineering, addressed the issue in an interview with Ars Technica, noting that 3.18 is a long-term support (LTS) kernel whose support already expired in January 2017.

The underlying problem is that the Linux LTS and Android support windows don't line up: Linux kernels get 2 years of support, while Android promises 3 years of security support. The fix being pursued is for the Android team to negotiate longer support periods with the Linux kernel maintainers.

The Android team is currently using a 4.4-based kernel internally, and the long-term goal is to track newer kernels, though that also depends on chipset vendors (such as Qualcomm) providing support.

Source - Ars Technica, image from Android


          sudo vulnerability lets ordinary users write files as root, ultimately taking over the machine   

The Qualys security team has reported a vulnerability in sudo, the program Linux uses to grant ordinary users limited root privileges, that lets an attacker overwrite arbitrary files on the machine and ultimately seize full root access.

The flaw comes from reading /proc/[pid]/stat without accounting for the fact that the executable name may contain a space, which corrupts the parsed values; an attacker exploits it by invoking sudo from a script whose file name contains a space.

All major distributions have now shipped patches. Ubuntu is affected from version 14.04 onward, and RHEL is affected in versions 5, 6, and 7.

Source - OpenWall, Red Hat Security Advisory, Ubuntu
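
To make the parsing pitfall concrete, here is a small illustration of the general problem (my own sketch in python, not the actual sudo code): the second field of /proc/[pid]/stat is the executable name, and if that name contains a space, naive whitespace splitting shifts every later field.

# Sketch only: why splitting /proc/[pid]/stat on whitespace is fragile
with open("/proc/self/stat") as f:
    fields = f.read().split()   # naive whitespace split
# fields[1] should be the "(comm)" executable name, but if that name contains
# a space it spills into fields[2], misaligning everything that follows.
print(fields[:4])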

          Freed from systemd: Devuan ships its 1.0 release   

Newer Linux distributions generally use systemd as a core component. While systemd modernizes a lot of the plumbing (faster boot, managing dependencies between services), its complexity has made many system administrators unhappy. When Debian decided to adopt systemd, a group of developers rejected the decision and forked the project as Devuan, which has now reached the point of being declared ready for use.

Devuan Jessie 1.0 is a long-term support release that will receive patches even longer than Debian Jessie itself (I have not found a life-cycle document saying exactly how long).

Since the 1.0 launch, a number of downstream distributions have already appeared, such as Gnuinos, which insists on 100% free software, and Dowse, a Linux for IoT; two companies have also announced commercial support for businesses.

Devuan keeps sysvinit as its init system, but openrc, runit, and sinit can be chosen instead.

Source - Devuan

          Windows 10 S has no command line, no PowerShell, and cannot run Linux   

Following the news that Ubuntu, SUSE, and Fedora are coming to the Windows Store, Microsoft has clarified that this capability does not apply to Windows 10 S, even though the distributions are "apps" from the Store.

Microsoft's reasoning is that not every Store app will run on Windows 10 S. Examples of excluded apps are command lines, shells, and consoles, which covers cmd.exe, PowerShell, and WSL.

Microsoft explains that Windows 10 S is designed for non-technical users such as teachers, students, and artists who don't need to customize their computers much and instead want a machine that is secure, stable, and fast.

Developers, admins, and IT professionals are advised to use regular Windows 10 instead.

Source - Microsoft


          Halloween 2013 Build using a Raspberry Pi   

 

This year for Halloween I wanted to start to scratch the surface of what I could do with a Raspberry Pi in order to bring a “haunted” experience to my front yard. My guidance/inspiration was from here.

Overall, I wanted to have a single motion sensor trigger a sequence of scenes where each scene would incorporate one or more lights, one or more audio tracks and be limited to about 30 seconds or less.

Parts List

I broke the parts list up into two groups, the first is the items you probably would need to buy online, the second you should be able to get most of them from your local Home Depot, Lowes or other hardware store.

Online parts list:

  1. 1 x Raspberry Pi, Model B

Raspberry Pi Model B 512MB RAM

  2. 1 x USB Powered Speakers (externally powered)

USB Powered Speakers

  3. 1 x PIR Motion Sensor

PIR (motion) sensor

  4. 1 x Breakout board/Extension wires, Male->Female, 4 to 6 inches in length

Premium Female/Male 'Extension' Jumper Wires - 40 x 12

  5. 1 x 5v 1Amp USB port power supply with an A/Micro B cable

5V 1A (1000mA) USB port power supply - UL Listed
USB cable - A/MicroB

  6. 1 x 4GB SDHC MicroSD Memory Card

4GB SD card for Raspberry Pi preinstalled with Raspbian Wheezy

  7. 1 x Low-Profile microSD card adapter for Raspberry Pi

Low-profile microSD card adapter for Raspberry Pi

  8. 6 x Solid State Relays, 3v to 110v+
    1. Essentially you apply 3 volts to one pair of terminals, which switches on the 110v pair.

SSR-10DA Solid State Relay - White + Silver

Optional online parts:

  1. You may need an HDMI (male) to VGA adapter if you do not have an HDMI-compatible TV or monitor.  To initially install the Pi OS, you will need a way to output the video.
  2. You may also need some sort of USB keyboard, wired or wireless; most just work.  If you don’t have one on hand, you should consider getting one of the smaller form factor wireless keyboards, which is portable and can be reused with your other Pis.
  3. Finally, you also need a way to give this device access to your network.  If you have a router with a spare LAN port you can use it; just be sure you have a Cat5e networking cable (male on both ends) available for the project.  Otherwise a hub will be just as suitable.  As an alternative you can find many mini WiFi USB dongles that are compatible with the Pi. I chose NOT to use a WiFi dongle because it is an additional cost.  I will not cover WiFi configuration in this post; there are many online articles outlining the process.

You can buy most of these from adafruit.com, dx.com, or any other favorite retailer. 

 

Hardware store parts list:

  1. 4 x (double) receptacle wall sockets [NEMA 5-15 (15A/125V earthed) Type B] Commercial Duplex Receptacle 15 Amp 125v, Gray
  2. 3 x electrical boxes
    1. 1 x single width
    2. 1 x double width
    3. 1 x triple width

  3. 1 x double electrical face plate, “blank” or cover
  4. 1 x single receptacle wall socket face plate
  5. 1 x triple receptacle wall socket face plate

 

1 Gang Duplex Receptacle Plate, White - 10 Pack 3-Gang Midway Nylon Blank Wallplate, in White

 

  6. 20 x twist-on wire connectors

Can-Twist One size fits all Wire Connector  300/tub

  7. 1 x 50 feet of electrical cable, of appropriate gauge for your country
  8. 1 x bag of electrical cable staples (you will probably need 50 to 60) GB 1/2 In Plastic Staple, White (15-Pack)
  9. 1 x bag of wood screws, 3” long, at least 24 screws
  10. 1 x bag of wood screws, 2” long, at least 12 screws
  11. 1 x plywood, 2 feet square, about 1” thick  (bottom base)
  12. 2 x 2x4, 2 feet in length (sides of base)
  13. 1 x 1x2, 2 feet in length (front)
  14. 1 x plywood, 1.5 feet square (back / relay anchor)

Box Construction:

Take the base plywood (the 2-foot square piece) and secure the 2x4’s to opposite sides.  Add the 1x2 to one end, and finally the 1.5-foot-square plywood to the back. 

This will form an awkward-sided box without a lid.  This is by design.  It will provide enough room for you to secure the two larger electrical boxes to the front left and right, and the smaller box on the left side, in the middle.  Finally, each relay can be secured to the back / relay anchor piece.

Since a picture is worth a thousand words, here are a few to help out.

WP_20130920_004WP_20130920_006WP_20130920_005WP_20130920_007WP_20130920_008WP_20130920_009WP_20130921_001WP_20130921_003WP_20130921_002

Notice the smaller receptacle box on the left.  It is “Always On”, meaning we hardwire power directly to both of those receptacles.  The larger box on the right, with the three pairs of receptacles, those are run through the relay switches.  Finally, the box on the front left is where we take the main line from the house, and split it into 7 (Always on, and the 6 via the relays).

Note: If I were to do this box again, I would probably use a terminal strip, instead of the lower left box to split the house main into the 7. 

Here is a photo of the same box, with the relay switches installed:

WP_20131127_004

Next secure your relay switches to the back, as you see in the above picture.  Do your best to make them level, straight, and centered in the box. 

Take your 50 feet of electrical wire and run a single line to the Always On box, and then 6 lines to the back, up and over, and then down and forward to the 6 receptacles on the front right.  You will need to split the “HOT” of each of these lines at the back and wire each HOT through a relay switch.  Take care to find out which exact color is “HOT” / “Phase” / “Live” for your house's electrical wiring.  For example, in Canada, black is the “Live” or “HOT” wire.  Split it out to the set of terminals on the relay switch.  Notice I also secure each split with an appropriately sized twist cap.

Use the electrical staples to secure each line down to the box as best as you can, and clean up the box until you are happy with it.

WP_20131127_001WP_20131127_003WP_20131127_008WP_20131127_010WP_20131127_011WP_20131127_012

 

Raspberry Pi Construction

You will need to first download the NOOBS installation media, and follow the instructions here to perform the installation of the “Raspbian” operating system.  It is essentially Linux slimmed down for the Pi.  Be sure to use the MicroSD card you ordered for the project earlier.

Now unbox the Raspberry Pi, MicroSD adapter, and USB power supply.  Plug your MicroSD card into the adapter, and that into your Pi.  I personally use the HDMI output to my living room television; you will need some sort of video output to perform the initial install on the Pi itself, so ensure that this is plugged in as well.  Make sure you also plug in your keyboard and your networking cable from a switch or router.

Finally apply power to the device with your USB Power adapter.

If it is not obvious what is what on the Pi, here is a useful image outlining the position of each component.

NOTE: The Raspberry Pi does not have any sort of onboard protection for hot-swapping anything from USB, HDMI, or other GPIO Ports.  Make sure you shut the device down appropriately before trying to plug anything in or out of it. Failing to do so may result in frying the little guy!

Follow the instructions to finish the installation of the “Raspbian” operating system, and eventually you should end up at a login prompt.  Take note of the IP address; it should be near the login prompt.  Log in with the default user name and password (pi / raspberry) for the Raspbian operating system.  If you missed the IP address on the login screen, use the command “ifconfig” in the terminal; its output will contain the device’s IP address.

Right now you should be able to log in to the Pi via SSH.  If you are on Windows, putty is a suitable tool with which you can SSH into the machine.  Use the IP address to connect to your Pi over SSH with putty.

Raspberry Pi & Python

Raspbian comes with python already installed; test it out by typing: 

python

at the command prompt; you should then end up in the python “REPL” (Read, Eval, Print Loop).  Try this command:

>> import this

You should see “The Zen of Python”, some guidelines for writing python code.

>>> import this
The Zen of Python, by Tim Peters

Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than *right* now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!
>>>

In order to exit the python REPL, hit Control-D (or type exit()).

 

Creating your first python script

  1. At the SSH prompt, type:
    1. touch hello.py
    2. chmod +x hello.py
    3. nano hello.py

“touch hello.py” will create a new, empty file in the current folder.

“chmod +x hello.py” will set the “execute” flag on hello.py, this allows us to execute it directly.

“nano” is a simple text editor available on the Pi; you should now be in the editing window for “hello.py”, which is empty.  Type in the following code:

 
#!/usr/bin/env python
print("Hello, world")

Save the document by hitting Control-O, then hit enter.  To exit, hit Control-X.  This should dump you back out to the command prompt.  In order to execute our script we type:

./hello.py

This should simply print out “Hello, world”, and then exit you back to the prompt.

Success!

Also installed by default is the ability to use the GPIO (General Purpose Input/Output) ports in python.  We will use the GPIO ports to apply 3v’s to our Solid State Relays. 

You will notice that on the PI, there are two rows of Pins sticking out, these are the GPIO pins, here is a diagram which explains each Pin:

Raspberry-Pi-GPIO-Layout-Revision-2

 

Take note of all of the green GPIO pins.  They are what we will be connecting to our relay switches.  We will cover more about these GPIO pins and Python in a section further down.

 

Raspberry Pi and the Box

Wiring the Pi up to the box, more specifically to the Solid State Relays, is done by first chaining a ground wire from each Solid State Relay’s negative/ground terminal (-) to the next, like in the following image.

WP_20131127_007

You can use the breadboard wires for this: strip off the plastic tips and about 1/2 inch of the wire sheath to expose the bare metal wire inside.  I actually take a single wire from the first relay to the second, and another from the second to the third and so on; after the final relay you take one last wire out to a ground pin on the Pi.  On this final wire, leave the female tip, which will make it easy to plug into the Pi.  That solves the common ground for each Solid State Relay.

In order to finish this you will need to take a single breadboard wire (female end) from a GPIO port on the PI to the Solid State Relay (male end), positive terminal.  Repeat this for each GPIO to Relay terminal.  It does not really matter which GPIO port is used, just write each down and their relationship to the final receptacle box of 6.

In the end you will see the following:

WP_20131127_002

Now that you have the Pi online, connected to the box via the relay switches, and the switches controlling power to our bank of 6 receptacles, we are ready to start writing more python scripts to control the lights!

 

Sample Scripts

The best way to get started programming for the Raspberry PI and its GPIO pins is to read over the RPi.GPIO module basics. Please review that prior to continuing.

Below is a set of scripts which should be enough to get you started with working with python and GPIO ports.  I did my best to document the code where it was appropriate.  If you need more assistance with python there are a plethora of online tutorials which you can learn from. 

on_off.py

#!/usr/bin/env python
import time
import RPi.GPIO as io

# https://code.google.com/p/raspberry-gpio-python/wiki/BasicUsage
io.setmode(io.BCM)
io.setwarnings(False)

light_one = 24

io.setup(light_one, io.OUT)


def light(light_pin, state):
    print("light control:" + str(light_pin) + " " + str(state))
    io.output(light_pin, state)

print("Turning light on")
light(light_one, True)
time.sleep(1)
print("Turning light off")
light(light_one, False)
print("All done")

allon_off.py

#!/usr/bin/env python
import time
import RPi.GPIO as io

# https://code.google.com/p/raspberry-gpio-python/wiki/BasicUsage
io.setmode(io.BCM)
io.setwarnings(False)

light_one = 24
light_two = 23

io.setup(light_one, io.OUT)
io.setup(light_two, io.OUT)


def light(light_pin, state):
    print("light control:" + str(light_pin) + " " + str(state))
    io.output(light_pin, state)

print("Turning lights on")
light(light_one, True)
light(light_two, True)
time.sleep(1)
print("Turning lights off")
light(light_one, False)
light(light_two, False)
print("All done")

That should be enough to get you started with controlling your receptacles.  Next we dive into some sample code for playing sound.  Plug in your speakers to the audio jack.  I found some sample sound bytes online, and used Audacity to convert them to 2 Channel MP3s if they were not already in that format.

playsound.py

#!/usr/bin/env python
import pygame
import os
import time

pygame.mixer.init()
pygame.init()


def playTrack(track):
    print("playing track:" + track)
    pygame.mixer.music.load(track)
    pygame.mixer.music.play()
    while pygame.mixer.music.get_busy():
        time.sleep(0.5)


print("ready to rock")

while True:
    for files in os.listdir("."):
        if files.endswith(".mp3"):
            print(files)
    sound = raw_input("Song? ")
    playTrack(sound)
    time.sleep(1)

Our last sample script demonstrates using the PIR motion sensor with the Pi, in this example I’m using GPIO port 25.  Notice that it is setting the io port to io.IN instead of io.OUT like our switch’s above.

motion.py

#!/usr/bin/env python
import time
import RPi.GPIO as io

io.setmode(io.BCM)
io.setwarnings(False)

pir_pin = 25
io.setup(pir_pin, io.IN)

def onMovement():
    print("movement!")

print("ready to rock")
while True:
    if io.input(pir_pin):
        onMovement()
    time.sleep(300)

 

In this script we set up an infinite loop with “while True”.  This allows our script to sit and wait for motion.  Once the PIR detects motion, we immediately call the “onMovement” function.

Remember to run “chmod +x script.py” on the Pi for each script file you create.  This will allow you to test it by just running the script name itself at the command line:

[$] > ./playsound.py

 

Putting it all together

Here is my final script for executing scenes based on motion.

#!/usr/bin/env python
import time
import RPi.GPIO as io
import pygame
import subprocess

io.setmode(io.BCM)
io.setwarnings(False)

pygame.mixer.init()
pygame.init()

pir_pin = 25
light_4 = 4
light_17 = 17
light_18 = 18
light_23 = 23
light_22 = 22
light_24 = 24

playing = False
io.setup(pir_pin, io.IN)
io.setup(light_4, io.OUT)
io.setup(light_17, io.OUT)
io.setup(light_18, io.OUT)
io.setup(light_23, io.OUT)
io.setup(light_22, io.OUT)
io.setup(light_24, io.OUT)

def allOff():
    light(light_4, False)
    light(light_17, False)
    light(light_18, True)
    light(light_22, False)
    light(light_23, False)
    light(light_24, True)

def scene1():
    print("scene 1, scream")
    light(light_22, True)
    playTrack("scream.mp3")
    light(light_22, False)

def scene2():
    print("scene 2, baby")
    light(light_17, True)
    playTrack("rosie.mp3")
    light(light_17, False)
    time.sleep(2)

def scene3():
    print("scene 3, torture")
    light(light_4, True)
    playTrack("torture.mp3")
    light(light_4, False)

def endScene():
    light(light_24, True)
    light(light_23, True)
    playTrack("laugh.mp3")
    time.sleep(2)


def onMovement():
    print("movement!")
    light(light_18, False)
    scene1()
    scene2()
    scene3()
    endScene()
    allOff()


def light(light_pin, state):
    print("light control:" + str(light_pin) + " " + str(state))
    io.output(light_pin, state)

def playTrack(track):
    print("playing track:" + track)
    pygame.mixer.music.load(track)
    pygame.mixer.music.play()
    while pygame.mixer.music.get_busy():
        time.sleep(0.5)


allOff()

print("ready to rock")
while True:
    if not(playing):
        if io.input(pir_pin):
            print("starting")
            playing = True
            onMovement()
            playing = False
            print("ended")
    time.sleep(300)

Action!

A video of the build in action:

          Ubuntu, SUSE, and Fedora land in the Windows Store, ready to install and run on Windows 10   

Besides the shock of iTunes coming to the Windows Store, Microsoft also announced that three major Linux distributions (Ubuntu, SUSE, and Fedora) are coming to the Windows Store as well.

Windows 10 already has a Linux subsystem based on Ubuntu, but users have had to install that component themselves through a somewhat involved process; a simple install button in the Windows Store makes it far more convenient.

The Linux subsystem can be switched from Ubuntu to other distros (such as SUSE), which is why Microsoft invited SUSE and Fedora in as two more options in the Windows Store, letting users pick the distro they want.

Note: this feature provides only the core of each Linux distro running on the Windows Subsystem for Linux, not a full Linux distribution.


          Extending your Home Automation network with Raspberry Pi   

 

Just thought I would share that last night I decided to throw some code together that mashes up the Raspberry Pi with the zVirtualScenes server.

In reality you could deploy the scripts to any machine capable of running python (OSX, Windows, *nix, etc..).  All communication is done via UDP using Multicast groups, so every node on the network will send to this single group, and all other nodes will receive the events.

Interesting idea.
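
For readers curious what that pattern looks like in code, here is a minimal sketch of UDP multicast in python (my own illustrative example, not the project's actual protocol; the group address and port are made up):

import socket
import struct

MCAST_GROUP = "239.255.42.99"   # made-up multicast group for illustration
MCAST_PORT = 5007               # made-up port

def send_event(message):
    # every node sends its events to the single multicast group
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
    sock.sendto(message, (MCAST_GROUP, MCAST_PORT))

def listen_for_events():
    # every node joins the group, so it receives everyone else's events
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", MCAST_PORT))
    mreq = struct.pack("4sl", socket.inet_aton(MCAST_GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    while True:
        data, addr = sock.recvfrom(1024)
        print("event from %s: %s" % (addr[0], data))

A sender would call send_event(b"motion detected") while every other node runs listen_for_events(), which gives the "one sends, all receive" behaviour described above.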

 

Here are some screen grabs from zVirtualScenes connected to a Raspberry Pi, with a PIR motion sensor:

1. Enabling the “SensorNet” adapter

ActivateAdapter

 

2. Auto-discovery working.  The server sends a “RegisterRequest” message to the group, and each node starts to report in with their capabilities.

DeviceAutoDiscovery

 

3. All motion values are stored in zVirtualScenes server, and can be used for triggering events

RaspberryPi with zVirtualScenes

 

4. Triggering an event based on motion detected on the RPI:

TriggerBasedonRpiMotionDetection

 

 

The idea here is that you can use the Raspberry Pi (base cost of $35) to setup a sensor network, a command network, etc.. on your LAN for all of your Home Automation needs.

 

Some examples


Have your garage door wired with a Pi and a Solid State Relay Switch (http://dx.com/p/ssr-10da-solid-state-relay-white-silver-230994), and have that device/switch show up in zVirtualScenes so you can easily control it.

 

sku_230994_1[1]

 


You could setup a series of cheap Sensors in your home wired to a single or many Pi's, and have each readable/actionable in zVirtualScenes.  Some examples:

Triple-axis analog accelerometer - for measuring motion and tilt
Force sensitive resistor - for sensing pressure/force
Temperature sensor - for measuring from -40 to over +125 degrees C
10K breadboard potentiometer
Hall effect sensor - for sensing a magnet
Piezo - can be used as a buzzer or a knock sensor
Ball tilt sensor - for sensing orientation
Photo cell sensor - for sensing light
IR sensor - for sensing infrared light pulsing at 38KHz
IR LED - for use with the IR sensor

FYI, you can buy ALL of those sensors for $35 right now: http://www.adafruit.com/products/176

 

basicsensors_LRG[1]


Have XBMC announce what’s playing information, and to be able to control XBMC via zVirtualScenes



Monitor and control remote OSX, Windows, Linux desktops via zVirtualScenes
   Disk SMART status, watch dog for apps & services, etc.. and trigger events in your Home Automation network based on these events.



Have a different device in your network Monitor zVirtualScenes events and respond accordingly.  This really allows us to break out of just controlling z-wave enabled devices, to being able to control ANY device on your network. 


There is really no limit of the possibilities here.

 

Show me the code!

Right now it is a bit too early to share all of my code. I do plan on opening it all up under some no-limit license, once it gets to an acceptable point for publishing.

With that said, here is the gist of getting a PIR Motion sensor on a PI on the network - it is in python, so tread lightly... :)




import GPIO 
from App import App

app = App("RPI-Upstairs");  ##give this device a unique name on your network

##Import your sensors 
from sensors.zMotionSensor import zMotionSensor as MotionSensor

##register your sensors 
pin7pir = MotionSensor("Main Entry Motion Sensor (PIR)", #unique display name for this sensor on this device 
    7,              ## GPIO port used 
    GPIO,  app )     ##just ignore these, we are injecting our dependencies


pin7pir.monitor()  #setup this sensor to start monitoring for input


app.wait()  ## setup our wait loop for our daemon, to allow for the magic to happen


 

On the zVirtualScenes side of things, all you need to do is enable the "SensorNet" adapter, and all will be recognized and registered automatically. 

ActivateAdapter


 

How can I get my own Pi?


Sample shopping cart items to get a basic PI with a motion sensor.

 

998_LRG[1]
Raspberry Pi Model B 512MB RAM - $39.95
http://www.adafruit.com/products/998

 

 

pirsensor_LRG[1]

PIR Motion Sensor - $9.95
http://www.adafruit.com/products/189

 

 

ID501_LRG[1]

***5V 1A (1000mA) USB port power supply - $5.95
http://www.adafruit.com/products/501

 

 

microusbcable_LRG[1]

***USB cable - A/MicroB - 3ft - $3.95
http://www.adafruit.com/products/592

 

SD102_MED[1]

***4GB SD Card - $7.95
http://www.adafruit.com/products/102

 

Most of these items you should be able to get locally or even cheaper from any vendor, ebay, monoprice, amazon, etc..  On adafruit check out the “Distributors” tab, for local options.

Total cost so far, from adafruit.com, is about $68, which is cheaper than any z-wave enabled Motion detector on the market today!


Once the base costs are covered for the Pi, you can add a bunch of cheap, additional sensors to this device (and thus your network).

More to come.


          Microsoft launches the final SQL Server 2017, usable on Linux and in Docker   

At Build 2017, Microsoft announced the final release of its SQL Server 2017 database, following preview builds that have been shipping since April.

SQL Server 2017 brings three headline new features.
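
As a quick illustration of the Docker support mentioned in the title, the Linux build could be started from the Docker Hub image of that era roughly like this (a minimal sketch; the SA password is a throwaway example, and you should check the current image name and tag before relying on it):

docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=YourStrong!Passw0rd' \
    -p 1433:1433 --name sql2017 -d microsoft/mssql-server-linux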


          Mark Shuttleworth confirms Ubuntu is not abandoning the desktop, but will focus more on Cloud/IoT   

After Ubuntu scrapped its big desktop and phone plans, many questions followed about where Ubuntu goes from here. CEO Mark Shuttleworth has now given an interview on the subject.

Shuttleworth says today's computing world splits into three legs: personal computing, data center/cloud, and edge/IoT. Ubuntu has already become the standard operating system of the data center/cloud world, and it should play a significant role in edge/IoT as well.

He admits the company misjudged the personal computing market in trying to converge PC, phone, and tablet, but insists the desktop remains an important market that Ubuntu will not abandon. In business terms, however, Ubuntu will focus on its strong data center/cloud market and the nascent edge/IoT market.

Source - OMG Ubuntu


          The day has finally come: Fedora will play MP3 files by default now that the patents have expired   

Linux users know well that Linux distributions cannot play MP3 files right after installation; a codec has to be installed separately afterwards. The reason is that MP3 was covered by patents, and any operating system that wanted to use it had to pay license fees to the Fraunhofer institute, the patent holder.

But the MP3 patents expired in November 2016, and some distributions have started adjusting. Fedora has announced that it will add MP3 support (technically, by shipping the gstreamer1-plugin-mpg123 package by default) in the near future, although it has not said exactly when (it will probably make Fedora 26, the next release).

Source - Fedora Magazine


          Ubuntu 17.10 wraps back around to the letter A with the name Artful Aardvark   

Ubuntu 17.04 used the codename Zesty Zapus, which means the naming scheme has reached the letter Z. What fans wondered was which codename the next release, Ubuntu 17.10, would use, and whether it would wrap back around to A.

Mark Shuttleworth, the Ubuntu project lead, has not announced it yet, but this time the name has surfaced in Ubuntu's bug tracker: Artful Aardvark.

The aardvark is a mammal that lives in Africa; its name means "earth pig" in Afrikaans.

Source - Ubuntu Launchpad, OMG Ubuntu, image from Wikipedia


          Ubuntu 17.10 will move to the Wayland display server, replacing its own Mir   

After Ubuntu dropped Unity, questions followed about the display server that was developed alongside it, Mir.

Mark Shuttleworth had already said that Unity development is stopped for good, while Mir lives on but is being repositioned for the IoT market instead of the desktop.

Now Will Cooke, manager of the Ubuntu desktop team, has confirmed that Ubuntu will move to Wayland, another display server, instead.

The announcement is not much of a surprise, since Wayland is already used in other distributions such as Fedora, and the switch should be completed in time for Ubuntu 17.10, the next release.

Source - OMG Ubuntu


          Ubuntu GNOME releases 17.04, announces plan to merge with mainline Ubuntu   

The Ubuntu GNOME project has announced its 17.04 release, based on the latest GNOME 3.24, together with its future direction: it will merge with mainline Ubuntu.

The Ubuntu GNOME and Ubuntu Desktop (Unity) teams will be combined into a single team. The Ubuntu GNOME side is currently planning with the Canonical side what will happen when, and will announce more later.

Users on Ubuntu 16.04 LTS or Ubuntu GNOME 16.04 LTS will upgrade straight to Ubuntu 18.04 LTS next year, marking the end of the two separate flavors.

Source - Ubuntu GNOME


          Software more compatible with Mac & Linux   
Hello. I've noticed that even though there are efforts to make more and more software compatible with Mac & Linux, it's still not the case everywhere. The fact is that programs like Neuf Talk, for example, only work on Windows, while the Mac & Linux versions are actually other programs from other companies that have nothing to do with it, and which, among other things, prevent us from enjoying the 10 free SMS messages and from making calls from anywhere as if we were calling from home. Instead, on the Neuf Talk page we get two VoIP programs that sometimes aren't even in French. So I would like Neuf to look into the problem! It's a consumer product like so many others, and not everyone has to own a Windows machine at home. Regards, Luis
          Flash Player 10.2 beta out now – Stage Video rocks !   
Adobe is  happy to announce a beta release of Flash Player 10.2 for Windows, Mac, and Linux. It is now available for download on Adobe Labs. Flash Player 10.2 beta introduces a number of enhancements , including Stage Video, a new API that delivers best-in-class, high performance video playback across platforms. The new beta also […]
          my first dev envoronment on Linux   
Hello group. I already have a php web site set up and hosted by an internet service provider. I FTP all the files over the internet to the site. I...
          Hanami: a modern web framework for Ruby   

Our friend Luis Figueroa, an expert in web programming, has recommended that we try out and share a modern web framework for Ruby called Hanami, which offers many features, excellent portability and usability, plus a web interface that will please more than a few people. What is Hanami? Hanami is a web framework […]

The article Hanami: un moderno framework web para Ruby appeared first on Hanami: un moderno framework web para Ruby.


          [PATCH v4 1/3] MAINTAINERS: give kmod some maintainer love   
"Luis R. Rodriguez" writes: (Summary) As suggested by Jessica, I've been actively working on kmod, so might as well reflect its maintained status.
might as well reflect its maintained status.
Changes are expected to go through akpm's tree.
Changes are expected to go through akpm's tree.
Cc: Jessica Yu <jeyu@redhat.com>
Suggested-by: Jessica Yu <jeyu@redhat.com>
Signed-off-by: Luis R. 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/MAINTAINERS b/MAINTAINERS index aab3ae5aa12c..d9f5d8687cc1 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -7561,6 +7561,13 @@ F: include/linux/kmemleak.h F: mm/kmemleak.c F: mm/kmemleak-test.c +KMOD MODULE USERMODE HELPER +M: "Luis R.
          [PATCH v4 0/3] kmod: help make deterministic   
"Luis R. Rodriguez" writes: (Summary) Figured some peformance folks might be interested in that tidbit, should they wish to go play.
should they wish to go play.
With wait:
With wait:
time ./kmod.sh -t 0008
real 0m16.366s
user 0m0.883s
sys 0m8.916s
sys 0m8.916s
time ./kmod.sh -t 0009
real 0m50.803s
user 0m0.791s
sys 0m9.852s
sys 0m9.852s
With swait:
With swait:
time ./kmod.sh -t 0008
real 0m16.523s
user 0m0.879s
sys 0m8.977s
sys 0m8.977s
time ./kmod.sh -t 0009
real 0m51.258s
user 0m0.812s
sys 0m10.133s
sys 0m10.133s
If anyone wants these in a git tree you can check out the 20170628-kmod-only branch from my linux-next tree [1], based on next-20170628.
          Re: [Linux-ima-devel] [PATCH v3 0/6] Updated API for TPM 2.0 PCR e ...   
Mimi Zohar writes: (Summary) For TPM 2.0, the first digest algorithm in the IMA hash agile crypto header will be used as the default digest for truncating/padding the other unspecified banks.

In order not to break the existing userspace ABI, we will still need to support the existing SHA1-based IMA securityfs measurement lists, whether or not SHA1 is included in the hash agile IMA securityfs measurement lists.

There's a TPM command to query TPM algorithms.
Right, tpm2_init_pcr_bank_info(), defined in Roberto's patch "tpm: introduce tpm_pcr_bank_info structure with digest_size from TPM", gets the TPM digest size and stores it in the active bank structure (tpm_pcr_bank_info).

Mimi

          Re: [PATCH 2/2] tpm: use tpm2_pcr_read() in tpm2_do_selftest()   
Jarkko Sakkinen writes: On Fri, Jun 23, 2017 at 03:41:57PM +0200, Roberto Sassu wrote:
Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com>
Reviewed-by: Jarkko Sakkinen <jarkko.sakkine@linux.intel.com>
/Jarkko
2.9.3

          Re: [PATCH 1/2] tpm: use tpm_buf functions in tpm2_pcr_read()   
Jarkko Sakkinen writes: On Fri, Jun 23, 2017 at 03:41:56PM +0200, Roberto Sassu wrote:
Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com>
Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
2.9.3

          Re: [PATCH v2] tpm: Fix the ioremap() call for Braswell systems   
Jarkko Sakkinen writes: On Thu, Jun 22, 2017 at 02:32:01PM -0700, Azhar Shaikh wrote:
Signed-off-by: Azhar Shaikh <azhar.shaikh@intel.com>
Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
1.9.1

          Re: [PATCH v5] drm/sun4i: hdmi: Implement I2C adapter for A10s DDC bus   
kbuild test robot writes: (Summary)
Hi Jonathan,

[auto build test WARNING on next-20170627]
[cannot apply to v4.12-rc7 v4.12-rc6 v4.12-rc5 v4.12-rc7]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Jonathan-Liu/drm-sun4i-hdmi-Implement-I2C-adapter-for-A10s-DDC-bus/20170629-001335
config: arm-sunxi_defconfig (attached as .config)
compiler: arm-linux-gnueabi-gcc (Debian 6.1.1-9) 6.1.1 20160705
reproduce:
        wget https://raw.githubusercontent.com/01org/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux bu
          linux-next: unsigned commit in the libata tree   
Stephen Rothwell writes:
Hi Tejun,

I appreciate the fix, but commit

  bb6133c54b97 ("libata: fix build warning from unused goto label")

has no Signed-off-by tag.

          Re: linux-next: build failure after merge of the block tree   
Stephen Rothwell writes: (Summary)
Hi Jens,

On Wed, 28 Jun 2017 09:11:32 -0600 Jens Axboe <axboe@kernel.dk> wrote:
> both a u64 put and get user.
Yes, put_user is fine (it does two 4-byte moves). The asm is there to do the 8 byte get_user, but the surrounding C code uses an unsigned long for the destination in all cases (some other arches do the same). I don't remember why it is like that.
> we copy things in and out for that set of fcntls.
OK, thanks.

          Error Msg Hamsphere 3.0 (1 reply)   
I have been trying to log in to Hamsphere 3.0 and it comes up with the following:

Error Msg
Module: Rx Audio driver (codec)
Fault: Cannot initiate the audio driver

I have tried /play1-4 but it still does not come up. I have not had this trouble before.
I am using Linux Ubuntu MATE 16.04.
I am also running Hamsphere 4.0, but I prefer Hamsphere 3.0 at this time because there are more people using it.

Any help to get rid of this error msg would be appreciated .

Thank you.

Mal Ellis
3D2MR
          Re: selftests/capabilities: test FAIL on linux mainline and linux- ...   
Andy Lutomirski writes: (Summary) The failure is worked around by this:

diff --git a/tools/testing/selftests/capabilities/test_execve.c b/tools/testing/selftests/capabilities/test_execve.c
index 10a21a958aaf..6db60889b211 100644
--- a/tools/testing/selftests/capabilities/test_execve.c
+++ b/tools/testing/selftests/capabilities/test_execve.c
@@ -139,8 +139,8 @@ static void chdir_to_tmpfs(void)
 	if (chdir(cwd) != 0)
 		err(1, "chdir to private tmpfs");
 }
 static void copy_fromat_to(int fromfd, const char *fromname, const char *toname)

I think this is due to the line:

p->mnt_ns = NULL;

in umount_tree().
          Re: [PATCH 9/9] RISC-V: Build Infastructure   
Palmer Dabbelt writes: (Summary) Oh, sorry about that -- you're right, I just missed that one as it wasn't in the original patch set.
I'll squash this in when we do a v4.

commit 1a262894827be647d55016da09f61eb49962dd7d
Author: Palmer Dabbelt <palmer@dabbelt.com>
Date: Wed Jun 28 14:11:17 2017 -0700

    Don't define CROSS_COMPILE in the u500 defconfig

diff --git a/arch/riscv/configs/freedom-u500_defconfig b/arch/riscv/configs/freedom-u500_defconfig
index b37908d45067..b580dc5c8feb 100644
--- a/arch/riscv/configs/freedom-u500_defconfig
+++ b/arch/riscv/configs/freedom-u500_defconfig
@@ -1,4 +1,3 @@
-CONFIG_CROSS_COMPILE="riscv64-unknown-linux-gnu-"
 CONFIG_DEFAULT_HOSTNAME="ucbvax"
 # CONFIG_CROSS_MEMORY_ATTACH is not set
 # CONFIG_FHANDLE is not set

          Julia Studio 0.4.4 Released!    

The updated version of our open-source IDE is available for Windows, Mac, and Linux.


          Paragon Hard Disk Manager 15 Premium 10.1.25.1137 + BootCD   

http://img12.nnm.me/5/1/b/f/3/8cab200d6db0361af7a40567bed.jpg

All boot disc variants: Linux, Win 7, Win 8.1, Win 10

Paragon Hard Disk Manager 15 is an integrated set of effective tools designed to solve most of the problems a PC user may encounter. Its functionality covers every aspect of a computer's life cycle, from performing virtually any partition operation to installing a system from scratch, providing reliable data protection, and securely wiping hard drives.

Read more


          RAD Studio 10.2 Tokyo is here   

This week, some days earlier than expected, the new version of our preferred programming environment, RAD Studio 10.2 Tokyo, was released. In this release the most important news is, without any doubt, the ability to compile our applications for a new platform, in this case Linux. However, it is not intended for [...] Continue reading

The post RAD Studio 10.2 Tokyo ya está aquí appeared first on El blog de cadetill.


          Managing a Corsair Strafe RGB on Ubuntu Linux   
My first self-gift of 2016 was a nice Corsair Strafe RGB mechanical keyboard. Playing with the RGB LEDs and the Cherry MX Brown switches is as good as it gets, not to mention that when I type (like right now) it's a joy to hear it sing. Once the first intense minutes of excitement were over, I asked myself: […]
          Changing DNS on Ubuntu Linux   
A looong time ago, in a galaxy far, far away (cit.), I used to write in this little corner of the web. These days I write elsewhere, basically because they pay me to. I'm back to write a couple of lines on this blog after a year because lately I've found myself no longer able to view one of my sites […]
          Chrome 5 for Mac and Linux out of beta   

Linux and Mac users needn’t look with envy anymore at their PC fellows who have been enjoying stable Chrome version for Windows over the past few months. After over half a year in […]

The post Chrome 5 for Mac and Linux out of beta appeared first on Geek.com.


          Push your GPU to the limits with Unigine’s free Heaven 2.0 benchmark   

You’ve treated yourself to the latest and greatest DirectX 11 card and now what? Easy, download and run Unigine’s new benchmark, crank up those settings to the max, and prepare to be amazed. […]

The post Push your GPU to the limits with Unigine’s free Heaven 2.0 benchmark appeared first on Geek.com.


          The iTablet: An iPad alternative that multitasks, has a webcam, and runs Windows 7 or Linux   

The iTablet runs Flash, has a webcam, supports multiple operating systems, provides 250GB of storage and USB ports, and runs your existing Windows software. If you’re sick and tired of reading how great […]

The post The iTablet: An iPad alternative that multitasks, has a webcam, and runs Windows 7 or Linux appeared first on Geek.com.


          An updated Google Voice extension lets you make calls directly in Chrome   

The Google Voice extension version 2.0, posted this past Friday, embeds the Google Voice service in the Mac or Linux version of Chrome. Upon installing the official Google-made extension, you can click on a […]

The post An updated Google Voice extensions lets you make calls directly in Chrome appeared first on Geek.com.


          Support The Linux Foundation with a Tux credit card   

If you are a supporter of Linux either with your time, experience, or money, there is now a way to both show your support and contribute a bit of cash through your everyday […]

The post Support The Linux Foundation with a Tux credit card appeared first on Geek.com.


          Microsoft releases 20K lines of source code to Linux kernel   

In an effort to bolster performance under Linux in its VMware competitor hypervisor software — Windows Server 2008 Hyper-V and Windows Server 2008 R2 Hyper-V — Microsoft today released 20,000 lines of source […]

The post Microsoft releases 20K lines of source code to Linux kernel appeared first on Geek.com.


          Ksplice gives Linux users 88% of kernel updates without rebooting   

Have you ever wondered why some updates or installs require a reboot, and others don’t? The main reason relates to kernel-level (core) services running in memory which either have been altered by the […]

The post Ksplice gives Linux users 88% of kernel updates without rebooting appeared first on Geek.com.


          Opinion: Microsoft’s days as king are ending   

I’ve written opinions on this subject in recent months (here and here) so I’ll be brief with this one. With Google’s late Tuesday Google Chrome OS announcement, a new competitor to Microsoft’s Windows […]

The post Opinion: Microsoft’s days as king are ending appeared first on Geek.com.


          How to Diagnose your Flaky Internet Connection   
I have Verizon Avenue DSL, the worst ISP I've ever had, so I've gotten to learn a few tricks on how to troubleshoot my internet problems. Here are the steps:

Can you ping the outside world?

Try pinging a well-known IP address like 4.2.2.2
  • In Windows: Click Start -> Run... -> cmd -> type "ping -n 50 4.2.2.2"
  • In Linux/Mac: Open a Terminal and type "ping -c 50 4.2.2.2"
You should see successful results (0% packet loss) like that below:
If you get messages like "no route to host", or get 100% packet loss, you've got much bigger problems. (If so, try doing "ping 192.168.0.1" - if that doesn't even work, then you probably aren't even connected to your router.)

Does resetting just the router help things?

Try unplugging (waiting 20 seconds) and re-plugging the power to your router. Does that help things? If so, you might have a crappy/old/broken router. I've had 3 different Netgear/DLink routers where resetting helped things. (In fairness, 1 of those was my fault: I plugged a 12v power supply into a router that wanted 7.5v -- the plug fit, the router got really hot, and periodically reset itself.)

Does resetting the modem and router help?

Try unplugging (waiting 30 seconds) and re-plugging the power to your dsl/cable modem, and also to your wireless router. Occasionally, your modem can get stuck with a bad IP address, and this will force it to get a new one. This really shouldn't happen if you have a good ISP, but it can. But this is only something that might happen every few months or so, not every day. If doing this helps all the time, you probably have a different problem.

Is it your DNS?

If you are getting a lot of "host/server not found" errors in your browser, and/or the "looking up domain.com ..." message in the status-bar at the bottom takes a long time, the problem might be a bad DNS server.

Background on DNS:

When you plug your wireless router into your cable/dsl modem, the router is given an IP address, as well as the IP address of where to do DNS lookups. (These DNS servers are hosted by your ISP, and are often flaky/overloaded.) When you plug your computer into the router (or connect over wireless), the router tells your computer to use 192.168.0.1 (the IP address of the router) as the DNS server. Your computer thinks that your router is the DNS server, but really, your router just turns around and does the DNS lookup for you.

How to fix your DNS:

One thing you can easily try is to tell your computer to use a different DNS server. Go to opendns.org -- they have instructions on how to do this for your particular computer. Their DNS Server IP addresses are 208.67.222.222 and 208.67.220.220
(Or you can use Verizon's public DNS servers of 4.2.2.2 and 4.2.2.3, or (new) Google's DNS servers of 8.8.8.8)
Alternately, you can change the settings on your router to use these IP addresses. (It's hard to explain how to do this - you have to visit http://192.168.0.1 from a computer that is plugged directly into your router's special port.) This way, all the computers in your home will benefit from having these new DNS servers.
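
A quick way to compare DNS servers (assuming the dig tool is installed, as it is on most Linux/Mac systems) is to query one directly and see whether it answers quickly and without errors:

dig @208.67.222.222 www.google.com
dig @8.8.8.8 www.google.com

On a Linux box you can also point the system at these servers directly by editing /etc/resolv.conf (assuming nothing else on your system manages that file for you):

nameserver 208.67.222.222
nameserver 208.67.220.220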

If using these DNS servers fixes your internet woes, then you've found your problem (and your solution).

Is your wireless connection flaky?

Try pinging 192.168.0.1 from your laptop that's connected wirelessly, and see what the packet loss is. (You really need to do 50 or 100 pings to get a fair estimate.) Ideally, the packet loss should be 0% -- if you do a ping from a computer that is plugged directly into the router that is what you'll get.
In my house, I would get packet losses of at least 3%, sometimes as high as 8% or even higher. The symptom is that the internet seemed very flaky. Sometimes web pages don't load, or take extremely long to load. Sometimes my ssh connections would lock up. If your wireless is the true cause, then all of these should be symptoms that you don't have when plugged directly into your router (all ethernet, no wireless).

How to fix your wireless connection:

I don't have a great solution here, since the problem might be that your house is just a "dead zone" as far as wireless goes. Or there might be too many other routers/microwaves/cordless phones/other interference right around you.

But here are some ideas to try:
  • try changing the "channel" of your wireless router (it's a number from 1-11) to something very different from what it was before.
  • try moving your router to a different place in the room (away from bookcases for example)
  • try upgrading the firmware of your router (a pain, I know)
  • buy a fancy new router
Update: I bought the Apple MB763LL/A AirPort Extreme Dual-band Base Station and so far everything is great - 0% packet loss, great range. It's a bit pricey, but I've decided I'm not going to skimp on productivity tools that I use every day.
          Microsoft offers Windows 7 Home Premium Upgrade pre-order for $49.99   

Are you interested in buying Windows 7? Are you prepared to pre-order it now? If so, you can save 58% on the Windows 7 Home Premium Upgrade version, getting it for $49.99. If […]

The post Microsoft offers Windows 7 Home Premium Upgrade pre-order for $49.99 appeared first on Geek.com.


          SSH keys in 2 easy steps   
These are simple instructions that will let you ssh from one Linux machine to another without needing to type your password.

Step 1) Generate your public signature

On your local machine (where you are ssh-ing from) type:
ssh-keygen
(Then hit ENTER to accept the default output file of ~/.ssh/id_rsa.pub and ENTER again twice if you're lazy and want to use a blank passphrase.) Note that you only have to generate a key once per client machine - the same public key will be used to access all servers.

Step 2) Copy your public signature to the server

Again, from your local machine, type:
cat ~/.ssh/id_rsa.pub | ssh remote_user@remote.example.com "cat >> ~/.ssh/authorized_keys"
(but replace remote_user@remote.example.com with your actual user and server.)

This fancy shell command appends the contents of your public signature to the end of the ~/.ssh/authorized_keys file on the server. (If you did a simple scp it would overwrite any previous authorized keys you've stored.)
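
(If your system ships the ssh-copy-id helper, as most Linux distributions do, it performs the same append for you; this is just an optional shortcut for Step 2.)
ssh-copy-id remote_user@remote.example.com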

You're done!

Next time you ssh into the server
ssh remote_user@remote.example.com
It should do this without prompting for any passwords.
          Installing djb-dns on a Linux machine.   

Down below is a script you can use to install djb-dns on a Linux system (like Ubuntu).

Specifically, it will install dnscache (a local caching nameserver) which resolves any domain name into an IP address. This is much like Google's public 8.8.8.8 DNS server.

Background on DNS lookups

To be clear: dnscache is not an "authoritative" dns server. A dns cache is simply a middle-man that executes global dns lookups on behalf of an incoming query, and caches the result for subsequent queries. See this clarification.

When a program does a dns lookup (turning a domain name into an IP, or vice versa) it uses a dns client library (e.g. calling the UNIX function gethostbyname()) to connect to a ("recursive") domain name server. That server (typically hosted by your ISP) does all the dirty work of first talking to the root-name-servers and going down the tree of DNS lookups until the full domain name is completely resolved.
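
For example, you can watch that client-side resolver path in action from the shell with a one-liner (this uses Python's standard socket module, which goes through the system's configured name servers):

python -c 'import socket; print(socket.gethostbyname("www.google.com"))'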

The file /etc/resolv.conf contains the IP address(es) of the domain name server(s) your system is using. It is a small file that typically looks something like:

nameserver a.b.c.d
nameserver e.f.g.h

Why do I need to run my own dns cache?

The dns cache servers that your ISP is hosting typically aren't very good. Those servers are overloaded, not well maintained, etc... If you are doing a high volume of dns-lookups they won't keep up -- for instance, if you are running a web crawler, or doing reverse-lookups on all the IP addresses that visit your site. Your ISP's servers will introduce latency and flakiness. I've personally dealt with 3 ISPs whose servers started returning errors because my volume was too high.

I've even run my own dns cache on my home Linux desktop because my home ISP's was so bad. (Nowadays I just use 8.8.8.8 for my home networks.)

What's so special about djb-dns?

It's rock-solid. It's written by this crazy-smart guy who knows his shit, and even has an unclaimed $1000 prize to find a security bug.

I've used it multiple times and haven't had any problems. The only downside is it's a pain-in-the-ass to install. Thankfully, I've gone through the headache for you.

The Install Script

# Must be run as root
# Also see http://hydra.geht.net/tino/howto/linux/djbdns/

#Create a /package directory:
mkdir -p /package
chmod 1755 /package

cd /package
wget http://cr.yp.to/daemontools/daemontools-0.76.tar.gz
gunzip daemontools-0.76.tar.gz
tar -xpf daemontools-0.76.tar
rm daemontools-0.76.tar
cd admin/daemontools-0.76
# Apply dumb patch to make things compile
cd src; echo gcc -O2 -include /usr/include/errno.h > conf-cc; cd ..
./package/install

cd /package
wget http://cr.yp.to/ucspi-tcp/ucspi-tcp-0.88.tar.gz
rm -rf ucspi-tcp-0.88
tar xfz ucspi-tcp-0.88.tar.gz
cd ucspi-tcp-0.88
# Apply dumb patch to make things compile
echo gcc -O2 -include /usr/include/errno.h > conf-cc
make
make setup check

cd /package
wget http://cr.yp.to/djbdns/djbdns-1.05.tar.gz
gunzip djbdns-1.05.tar.gz
tar -xf djbdns-1.05.tar
cd djbdns-1.05
# Apply dumb patch to make things compile
echo gcc -O2 -include /usr/include/errno.h > conf-cc
# Allow more simultaneous dns requests
sed -i -e "s/MAXUDP 200/MAXUDP 600/g" dnscache.c
make
make setup check

########## Install Users and Service directories ###########
groupadd dnscache
useradd -g dnscache dnscache
useradd -g dnscache dnslog
/usr/local/bin/dnscache-conf dnscache dnslog /var/dnscache
ln -s /var/dnscache /service

# Fix the nameservers to point to current ICANN structure 
# This assumes you have dig installed 
# Patch in the current list of root servers  
for a in a b c d e f g h i j k l m
do
  dig +short $a.root-servers.net.
done > /var/dnscache/root/servers/\@

# Increase the cache to 100MB
echo 100000000 > /service/dnscache/env/CACHESIZE
echo 104857600 > /service/dnscache/env/DATALIMIT

# Change multilog to keep more logs
echo "#!/bin/sh" > /service/dnscache/log/run
echo "exec setuidgid dnslog multilog t s10000000 ./main" >> /service/dnscache/log/run
Now all the tools and binaries are installed. To verify the installation you can do:
dnsip www.google.com
Now you just have to kick off the dnscache server and update /etc/resolv.conf. You will want to run the following script at system startup (if you don't, the file /etc/resolv.conf might get overwritten by your system):
# Must be run as root
rm -rf /etc/resolv.conf.prev
mv /etc/resolv.conf /etc/resolv.conf.prev
echo "nameserver 127.0.0.1" > /etc/resolv.conf

## init q  # (is this needed?)
/command/svscanboot &
sleep 5
svc -u /service/dnscache   # FYI: -t does a reboot
svstat /service/dnscache
svc -t /service/dnscache/log
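Once everything is up, a couple of quick sanity checks (this assumes dig is installed; tai64nlocal ships with daemontools):
# Query dnscache directly on the loopback address
dig @127.0.0.1 www.google.com +short
# Watch the log that multilog writes, decoding the timestamps
tail -f /service/dnscache/log/main/current | tai64nlocal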
Enjoy!
          Sonic TALK120 - All Aboard the Loveboat   
After a quick mention of the new Macs and the i7 processor, we play a burst of the theme from the Loveboat in honour of PJ Tracy, who's enjoying a late honeymoon on a cruise. Then we start out with a Vince Clarke interview in UK free paper Metro, have a quick chat about the international public unveiling of the Roland V-Piano in Bristol, and discuss the Energy-XT small-footprint DAW for Mac/PC and Linux and what a good DAW should be. Then we come clean about our most unwise or disappointing studio purchases, put on our thinking caps to try and help Portishead figure out how to sell their music in the future, and look at the Pul(SEW)iidth felt synths. Finally, this week's tumbleweed moment is instigated by Nick when we attempt to discuss the Andre Michelle AS3 Flash Karplus-Strong guitar synth.

          Evolution 3.6.2   
The free open-source program Evolution is an e-mail suite that is well-beloved in the Linux community.
          Embedded c++ QNX Linux Developer in Atlanta   

          259: Rapidfire 86   
Dave and Chris are back together for a Rapidfire episode talking about the modern 2017 stack for front end development, Sass vs native CSS, best practices for HTML comments, Microsoft’s efforts with Linux, database cleaning, and the ongoing Atomic CSS debate. Questions 22:25 What is the best practice for HTML comments? 28:30 The Mac use […]
          How to implement password access to specific file type in Linux/Windows programmatically?   

Linux and Windows question.

  1. I do not want the user to be able to open any ".doc" file (including ones they bring in) without a password.
  2. The user tries to open a file "test.doc".
  3. A password prompt appears.
  4. If the password is correct, the file is opened.
    Does anyone know how to do this?

I read this guide:
http://www.wikihow.com/Create-a-Password-Protected-File-on-Windows-7

The problem is that I want to do this by mask (i.e. for a whole file type), not just for a few known files.


          172: With Tim Brown   
Tim Brown is the Type Manager for Adobe Typekit. He joins us this week for a deep dive into web typography. We talked about (roughly in order): Q & A: 26:44 I like the Georgia font a lot and have used it for a lot of my designs. Recently I moved to Linux and realized […]
          Possible local privilege escalation in Linux and others; could be used to gain administrator privileges   
The vulnerability, dubbed "Stack Clash," could reportedly be exploited to trigger memory corruption and even allow arbitrary code execution.
          Windows nightly builds now available   

We began releasing nightly builds of Servo last July, and we are pleased to announce that our Windows builds are now ready for public consumption as well! They can be found on the official download page alongside links to Linux and macOS builds, and will be updated on a daily basis.

servo.org homepage in Servo

We’d like to recognize the recent efforts of jonathandturner and codec-abc, who investigated and solved the final issues that were blocking this preview. This was also made possible through previous work by vvuk, UK992, and larsbergstrom. Thank you all!

We encourage everyone on Windows to experiment with the nightly builds. This is pre-alpha software, so please file issues about anything that doesn’t work as expected!

Yahoo! New Zealand in Servo


          JOB OPENINGS: SUPPORT ANALYST – MARINGÁ-PR   
Administration and support of desktops and notebooks, formatting, software installation; network/Internet support and maintenance; setting up new network points; knowledge of firewalls, e-mail administration, AD (Active Directory) servers, Linux (Red Hat) servers, VPN, VMware, Hyper-V, basic knowledge of Microsoft Office. Experience in the role; good communication, professional demeanor, proactive; technical English; knowledge of…
          JOB OPENINGS: SUPPORT TECHNICIAN – MARINGÁ-PR   
– Administration and support of desktops and notebooks, formatting, software installation; network/Internet support and maintenance; setting up new network points; knowledge of firewalls, e-mail administration, AD (Active Directory) servers, Linux (Red Hat) servers, VPN, VMware, Hyper-V, basic knowledge of Microsoft Office. Experience in the role; good communication, professional demeanor, proactive; technical English; knowledge…
          Git Clone Network Shared Project in Windows   

Originally posted on: http://geekswithblogs.net/WinAZ/archive/2015/12/02/git-clone-network-shared-project-in-windows.aspx

Not sure why I keep forgetting this, but it bites me occasionally and maybe writing it down will prevent a search that results in Linux shell non-answers. I currently use Windows 10, but this is the case for previous versions too. The problem is that I keep forgetting that git clone requires forward slashes, rather than back slashes. In normal cases, back slashes work, as in:

 

>net use \\someshare mypassword /USER:mydomain\mylogin

 

However, cloning a git repository like this:

 

>git clone \\someshare\git\someproject

 

gives me:

 

Cloning into 'someproject'...
fatal: '\someshare\git\someproject\' does not appear to be a git repository
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.

 

grrrr…  <<= that’s me

 

To fix that, switch from back-slashes to forward-slashes like this:

 

>git clone //someshare/git/someproject

 

and life is good.
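The same forward-slash rule applies if an existing clone already has its remote recorded with back slashes; reusing the same share path, something like this fixes it up:

>git remote set-url origin //someshare/git/someproject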

 

@JoeMayo


          Devices: Ubuntu Core, Yocto, Microsoft, and Tizen   
  • Ubuntu Core opens IoT possibilities for Raspberry Pi (CM3)

    Ubuntu Core running on the Raspberry Pi Compute Module 3, which is a micro-version of the Raspberry Pi 3 that slots into a standard DDR2 SODIMM connector, means developers have a route to production and can upgrade functionality through the addition of snaps – the universal Linux application packaging format.

    Device manufacturers can also develop their own app stores, all while benefiting from the additional security of Ubuntu Core.

  • Rugged marine computer runs Linux on Skylake-U

    Avalue’s “EMS-SKLU-Marine” is an IEC EN60945 certified computer with 6th Gen Core CPUs, -20 to 60°C support, plus 2x GbE, 4x USB 3.0, M2, and mini-PCIe.

    The EMS-SKLU-Marine is designed for maritime applications such as control room or engine room, integrated bridge systems, propulsion control or safety systems, and boat entertainment systems. Avalue touts the 240 x 151 x 75mm box computer for being smaller than typical boat computers while complying with IEC EN60945 ruggedization standards.

  • Module runs Yocto Linux on 16-core 2GHz Atom C3000 SoC

    DFI’s rugged, Linux-ready “DV970” COM Express Basic Type 7 module debuts the server-class, 16-core Atom C3000, and supports 4x 10GbE-KR and 16x PCIe 3.0.

    DFI promotes the DV970 as the first COM Express Basic Type 7 module based on the Intel Atom C3000 “Denverton” SoC, but it’s the first product of any kind that we’ve seen that uses the SoC. Intel quietly announced the server class, 16-core Atom C3000 in late February, with a target of low-end storage servers, NAS appliances, and autonomous vehicles, but it has yet to publicly document the SoC. The C3000 follows other server-oriented Atom spin-offs such as the flawed, up to 8-core Atom C2000 “Rangeley” and earlier Atom D400 and D500 SoCs.

  • Why Microsoft's Snapdragon Windows 10 Cellular PC Is Walking Dead

    Intel's veiled threat to file patent infringement suit against any company emulating x86 Win32 software on ARM-based computers has probably slayed Microsoft's Cellular PC dream.

  • New BMW X3 brings latest digital and driver assistance tech, connects to your Gear S2 / S3

          Mid-Level C++/Linux Software Engineer job in Herndon, VA   
Seeking a mid-level C++ Embedded Engineer for a permanent job in Herndon, VA.

Job Description:
This is an entry to mid-level software engineer position, developing embedded software for our hub linecard product using C/C++ on a Linux platform. The position provides an excellent opportunity to work for a rapidly growing company in the technology hub of Northern Virginia, and to gain experience in full life cycle software development, learn embedded system development skills, and to advance your career with the company's growth. The ideal candidate must have a BS degree in CS or EE, plus a couple of years of experience. MS degree preferred.

Required Experience:
• 3+ years of related software development experience
• hands-on C/C++ development experience, STL
• OO knowledge and programming experience in C++
• hands-on software development experience on Linux/Unix
• TCP/IP programming experience
• flexibility and ability to learn and use new technologies
• ability to work well in a team environment as well as independently and get things done

Extremely beneficial:
• experience in writing Unix shell or Python scripts
• cross-platform Linux/Unix programming experience
• Knowledge of Linux kernel and drivers

Beneficial:
• experience with GDB
• experience with Git
• network programming

Education:
• Bachelor's degree in Computer Science or similar is required
• Master's degree in Computer Science or equivalent is preferred

Interested in this C++ Embedded Engineer job in Herndon, VA? Apply here!
          Senior Business Analyst - Patching   
Mastech is a growing company dedicated to innovation and teamwork. We are currently seeking a Senior Business Analyst – Patching for our client in the IT Services domain. We value our professionals, providing comprehensive benefits, exciting challenges, and the opportunity for growth. This is a Contract To Hire position and the client is looking for someone to start immediately.

Duration: 6 Months Contract To Hire
Location: Herndon, VA/ Zip Code: 20170
Compensation: Market Rate

Role: Senior Business Analyst - Patching

Role Description: The Senior Business Analyst – Patching would need to have at least 5+ years of experience.

Required Skills:

- 4-6+ years of relevant production support and/or technical development SDLC experience, and a basic understanding of UNIX/Linux servers.
- A solid understanding of the change request process.
- Exceptional listening, verbal and written communication, and strong facilitation and interpersonal skills.
- Strong Microsoft Office skills including SharePoint and Excel (must know pivots and how to generate reports).
- An ability to adapt to change, to shift gears and change direction on projects.
- Strong research skills with an attention to detail.
- An ability to work independently and jointly in unstructured environments in a self-directed way.
- A strong ability to manage multiple activities in a deadline-oriented environment.
- A four-year Bachelor's degree in a relevant field.

Education: Bachelor's Degree
Experience: Minimum 5+ years
Relocation: No, this position will not cover relocation expenses
Travel: No
Local Preferred: Yes

Recruiter Name: Jaydeep Manna
Recruiter Phone: 877 884 8834 (Ext: 2203)

EOE
          Identity Management Security Consultant (Job #6409)   
The individual will be responsible for assisting the Federal Lead Information Systems Security Officer (ISSO) on a variety of tasks, projects, and initiatives. A well-qualified security professional will have a minimum of 2-3 years of hands-on experience administering, designing, and/or implementing Oracle's Identity, Credential and Access Management (ICAM) product or an equivalent identity management product. As a PAISSO, the candidate will be responsible for overseeing end-to-end architecture, design, and implementation of ICAM and the identity lifecycle. The candidate will also perform all tasks related to Certification and Accreditation and ensure that the system is compliant with all required security controls. It is very important for this position to understand the end-to-end architecture, design, and implementation of ICAM or an equivalent identity product.

Key Responsibilities:

Oversee Customer's ICAM implementation into the enterprise
Perform all tasks related to Certification and Accreditation of ICAM implementation
Coordinates all monthly scans with the SOC and others
Review monthly vulnerability scan reports and track weaknesses in POAMs as needed
Work with C3E admins to resolve weaknesses such as configurations, patches, etc.
Work closely with customers regarding the closure and / or transfer of POAMs / vulnerabilities
Review System Configurations to ensure they are in accordance with DHS hardening guidelines
Receive and approve access requests to ensure user privileges are commensurate with required duties
Develops and maintains security authorization documentation (e.g. Security Plan, Contingency Plan, Configuration management Plan, Encryption Plan, Incident Response Plan, Waivers / Exceptions, Policies and procedures Manual etc..)
Review System Configurations to ensure they are in accordance with DHS hardening guidelines
Review all proposed change requests related to system design / configuration and perform security impact analysis

Job Qualifications

Review all proposed change requests related to system design / configuration and perform security impact analysis
Minimum 3-5 years specific experience with information assurance, information security policies/procedures/standards, and compliance assessment
Must demonstrate solid technical understanding of Identity Management
5-7 years of Oracle Linux, Redhat Linux, and Oracle Cloud Technologies experience a plus
2-3 years of architecture, design, implementation and/or administration of Oracle ICAM suite or equivalent identity product experience highly desired
Experience reviewing vulnerability scans such as Nessus and AppDetective
Ability to communicate effectively, both written and oral, with senior officials and also with both technical and non-technical audiences
Ability to organize and plan effectively, prioritize taskings with management, and use time effectively
Must possess excellent customer service attitude and demonstrate strong problem solving and troubleshooting skills on a daily basis
Ability to work additional hours as required, respond well under pressure, and be a team player with a "can do" attitude at all times
Familiarity with NIST 800-53 standards
Bachelor's degree from an accredited university. Degree in Information Systems, Computer Science, Computer Engineering, Information Security, or Information Assurance strongly preferred, but not required if work experience reflects a career in this field
CISSP certification (or willing to attain a CISSP within 6 months of employment)
          Senior Java Developer with Web UI(DC) (Job #6041)   
Using modern Open Source Java, JavaScript frameworks in a continuous integration environment, you'll join our team of developers building the next-generation of customer engagement systems for a federal agency.
Responsibilities:
• Work with product teams and product owners to define and develop UI requirements for large internet-facing, enterprise software applications
• Drawing on components from the project's open-source framework, use JavaScript libraries (AngularJS, Bootstrap, jQuery), HTML5, CSS3 to design, build and test compelling web applications
• Develop test automation solutions utilizing state-of-the-art open source testing frameworks;
• Perform automated unit, integration, functional and behavior-driven testing of Web UIs and backend services.
• Experience working in a fast-paced, client-facing environment with changing requirements
• Adhere to standards and best practices
• Constantly seek opportunities to learn and improve team processes
• Initiate and conduct manual/automated code reviews
• Participate in Agile development process
Requirements
• 3 – 7 years of recent Java development experience with emphasis on web UI
• Experience with consuming RESTful and/or Web Services, JSON
• Knowledge of Spring Framework and Hibernate is highly desirable
• Experience with Bootstrap.js and AngularJS is highly desirable
• Experience with Section 508 Compliance
• Understanding of HTTP/S and related protocols
• Hand-on experience using version control system (Git preferably) and build automation systems
• Demonstrated knowledge and hands-on experience with Linux/UNIX operating systems
• Knowledge of and experience with agile software development methodologies
Must have skills:
Java, JUnit
CSS3
HTML5
XML
JavaScript frameworks
jQuery
Ajax

Education:
•Bachelor's Degree in a relevant discipline is desired
Clearance:
•United States Citizenship and the ability to obtain and maintain a Public Trust Clearance is required
          Project Manager - Technical -   
This Project Manager - Technical:

• Great Pay to $120K

Position: Project Manager Technical

Location: Newport News, VA 23601

Direct-hire: Full time Perm Position

Rate: Open


SUMMARY

The Program Manager shall be the point of contact with the Contracting Officer, and/or his or her representative, and shall coordinate all of the work ordered by task orders.


PRIMARY FUNCTIONS

• The PM is responsible for day-to-day management of overall contract support operations; this can involve multiple projects and groups of personnel at multiple locations.
• Organize, direct, and coordinate the planning and production of all contract support activities.


QUALIFICATIONS

• 10 or more years of experience involving design, development, and delivery of technical training and training products and the direct supervision of technical personnel involved in the life cycle management support of complex systems.

Skills: PHP, MS SQL, Linux, HTML, Lamp, training, PMP, MS Project, Visio, SDLC


TYPE OF TECHNOLOGY USED

• LAMP environment
• PHP or a similar language
• SQL
• MySQL
• Linux and Apache
• HTML and Javascript is a plus

* Must be Clearable- Clearance is required for this position

We are an equal employment opportunity employer and will consider all qualified candidates without regard to disability or protected veteran status.
          Senior Solutions Architect (Job #6334)   
We are seeking a driven individual who has a desire to work with some of the best Cloud Architects in the Market. Our architects support the full life cycle from client needs analysis, through design, build migration and go live. They are continually assessing emerging products and services that improve the business of the clients we serve.

The Senior Solution Architect- a combination of a subject matter guru, trusted advisor and cloud services authority, will be responsible, working as part of our cloud team, to help define the tools, processes and methods, and develop systems that will utilize Private and Public clouds (AWS, Azure, and others) to support enterprise adoption of our managed cloud service offerings. In addition, the Senior Solutions Architect will use systems analysis techniques, enterprise architecture planning process and procedures while consulting with clients to determine cloud computing specifications and needs for our clients.



Responsibilities also include:

Design, and implementation of customized technical virtualization solutions for clients both on premises and public cloud environments.
Develop complete virtualization solutions taking into account sizing, infrastructure, data protection, disaster recovery, and application requirements and hybrid access to premises based enterprise systems.
Creation of a bill of materials for solution specific virtualization product and service requirements
Provide cloud focused demonstrations, evaluations and proof of concept support
Contribute to the development of Statements of Work and Proposals
Provide troubleshooting expertise for virtualization performance and other issues
Develop AWS, Azure cloud solutions for clients
Manage and perform V2C, P2C, and P2V migrations of systems for clients.
Integrate appropriate security and information assurance controls into solutions to ensure accreditation based on the appropriate regulatory requirements that the client operates under. Support Cloud Assured leadership in continuous improvement of our solutions to maintain competitive advantages.
Assist with RFP responses


Required Skills

Cloud Architecting / Solution Delivery expertise preferably AWS services
Demonstrable Enterprise Architecture planning and design skills
Experience with scripting and automation tools such as Puppet and Chef
Good written and verbal communication skills
Good analytic, organization, presentation, customer service and facilitation skills
Excellent problem solving skills; the ability to manage multiple tasks under tight deadlines.



The candidate should have a balance of knowledge in Operating systems, Network, Security and programming or scripting

The candidate must have working experience in .NET, or similar programming or scripting languages
Ability to gather customer requirements and translate those requirements into short and long term
The candidate should have in depth understanding of authentication and authorization services; e.g.; AD, ADFS, SAML, LDAP & the backend associated APIs that connect various messaging, cloud and social platforms.
Ability to understand and support network, system and application security hardening, penetration testing, continuous monitoring, and support of policy.
Ability to support network and system risk assessment and to support network and system security planning and documentation.



Preference will be given to candidates that also have database, applications services support skills:

The candidate can be considered an expert in one or more platforms (Microsoft, or a flavor of Linux) with 5 or more years' experience
The candidate can be considered an expert in a combination of database and platform application services, e.g., Microsoft Exchange, SharePoint, Lync, Apache software, SQL, Oracle, MySQL, AWS RDS, Mongo. Hadoop/EMR experience is a plus.
Full understanding of DNS, WINS, DHCP and LDAP compliant directories is also a differentiator of candidates.
Cisco network routing and switching skills
System Monitoring Tools that can be easily extended into AWS, Azure and premise based systems for a single enterprise view is a plus.

Required Experience

5+ years' experience in engineering enterprise datacenter solutions
Candidates must be a US Citizen
          Mobility Security Engineer (Job #6410)   
A well-qualified security professional will have minimum 2-3 years of hands-on experience administering, designing, and/or implementing Mobile Device Management systems.

Key Responsibilities:

Oversee Customer's MDM design, implementation and integration into the enterprise continuous integration environment
Review all proposed change requests related to system design / configuration and perform security impact analysis
Perform all tasks related to Certification and Accreditation of MDM and App Store implementation
Coordinates all monthly scans with the SOC and others
Review monthly vulnerability scan reports and track weaknesses in POAMs as needed
Work with System Administrators to resolve weaknesses such as configurations, patches, etc.
Work closely with customers regarding the closure and / or transfer of POAMs / vulnerabilities
Review System Configurations to ensure they are in accordance with DHS hardening guidelines
Receive and approve access requests to ensure user privileges are commensurate with required duties
Develops and maintains security authorization documentation (e.g. Security Plan, Contingency Plan, Configuration management Plan, Encryption Plan, Incident Response Plan, Waivers / Exceptions, Policies and procedures Manual etc..)
Help define baselines for penetration testing.
Review System Configurations to ensure they are in accordance with DHS hardening guidelines
Review all proposed change requests related to system design / configuration and perform security impact analysis

Job Qualifications

Minimum 2-4 years specific experience with information assurance, information security policies/procedures/standards, and compliance assessment
Must demonstrate solid technical understanding of iOS and/or Android mobile application development
3-5 years of Oracle Linux, Redhat Linux, and Oracle Cloud Technologies experience a plus
2-3 years of supporting architecture, design, implementation and/or administration of MDM product such as AirWatch or GOOD Dynamics or equivalent product is highly desired
2-3 years of supporting architecture, design, implementation of Mobile App Store for Android and / or iOS highly desired
Experience reviewing vulnerability scans such as Nessus and AppDetective
Ability to communicate effectively, both written and oral, with senior officials and also with both technical and non-technical audiences
Ability to organize and plan effectively, prioritize taskings with management, and use time effectively
Must possess an excellent customer service attitude and demonstrate strong problem solving and troubleshooting skills on a daily basis
Ability to work additional hours as required, respond well under pressure, and be a team player with a "can do" attitude at all times
Familiarity with NIST 800-53 standards
Bachelor's degree from an accredited university. Degree in Information Systems, Computer Science, Computer Engineering, Information Security, or Information Assurance strongly preferred, but not required if work experience reflects a career in this field
CISSP or ability to successfully attain it within 6 months of employment
          Senior Linux System Administrator (Job #6385)   
Our Senior Systems Administrator practices a hybrid of System Administration, Cloud Engineering, Configuration/Release Management, Systems integration, and Software Development.

Responsibilities
• Implement and manage multi-server deployments in a virtual environment, such as AWS or Rackspace Cloud.
• Collaborate with development teams to deploy new code to production environments.
• Set up and maintain monitoring systems and alarms for production environments.
• Set up back-up mechanisms for production environment and test disaster recovery mechanisms.
• Collaborate with Operations and Development teams to evolve operations best practices.
• Participate in 24/7 on-call schedules with team members

Qualifications
• 5-7 years' experience combining Development and Operational Administration.
• In-depth knowledge of an infrastructure automation framework, such as Chef, Puppet, or Ansible.
• Strong Linux System Administration experience – specifically for web applications.
• Proficiency in scripting languages and shells – Perl, Python, Ruby, PHP, and shell scripting.
• Experience setting up and managing MySQL Databases.
• Experience setting up and managing Tomcat, Apache, and Solr.
• Ability to articulate concepts/ideas and work effectively with others under pressure.
• Incorporate commitments, strategic objectives, and day-to-day tasks seamlessly.
• High level of problem solving and conflict resolution capabilities.

Desired Experience
• Knowledge and experience of typical build and release management processes, including continuous integration packages
such as Hudson or Jenkins.
• Knowledge and experience of operational monitoring systems, such as Hyperic or Nagios.
• Development experience with Java in a web application setting – BIG PLUS.
• Exposure to Cloud Engineering - BIG PLUS.
Education and/or Experience
• BS in Computer Sciences or equivalent experience required.
          Java Developer (Job #6358)   
Previous experience working in software engineering and large-scale implementations of statistical methods to build decision support or recommender systems will prepare you for this role. You will need to be innovative and entrepreneurial to work within a start-up-like environment.

Responsibilities

Work with large and complex data sets to solve difficult and non-routine problems
Develop analytic models and work closely with cross-functional teams and people to integrate solutions
Drive the collection of data and refinement from multiple high-volume data sources
Research methods to improve statistical inferences of variables across models


Job Qualifications

3+ years of relevant work experience
Experience programming with Java

Experience working with machine learning and distributed computing tools like Hadoop is preferred
Excellent interpersonal and communication skills
Excellent debugging and testing skills, and likes to quickly learn new technologies
BS in Computer Science, Statistics or equivalent practical experience
Large-scale systems (billions of records) design and development with knowledge of UNIX/Linux
Strong sense of passion, teamwork and responsibility
          Network Engineer (Job #6364)   
SLAIT Consulting is currently seeking Network Engineers for our client in the Newport News, VA area.

Summary:
The Network Engineers are needed for the implementation and deployment of an enterprise level Network overhaul project.

Job Duties:
* Coordinate implementation with 1300 branches.
* Streamline procedure.
* Support all hardware, systems, and software for networks.
* Install, configure, and maintain network services, equipment, and devices.
* Perform troubleshooting analysis of servers and associated systems.
* Requires thorough knowledge of networking essentials.
* Oversee software and network security.
* Strong analytical abilities and professional office experience required.
* Candidates will be working by phone with all the remote locations.

Technical Skills and Requirements for Network Engineer:
* Certifications: MCSE, CCNA, CCNP, CCIE, CNE
* Systems: Windows, Cisco Systems, UNIX, Linux, Novell
* Networking: Switches, Routers, Hubs, Servers, Cables, Racks, Firewalls, LAN, WAN, TCP/IP, DNS, UDP, Latency, VoIP, QoS, EIGRP, BGP, OSPF, NHRP, ATM, PPP, MPLS

Other Technical Skills Required:
Experience with Windows, Cisco Systems, 2900 series routers, DMVPN, MPLS Internet Breakout, Guest Wi-Fi, Switching, LAN, WAN, TCP/IP, DNS.


Why SLAIT?
We have been growing since 1990 with offices in Virginia, Gaithersburg, MD., New York, Raleigh, NC, and Austin TX. For over twenty three years, we have delivered customized, creative IT solutions for customers in the commercial, and state and local government sectors.
*Staff Augmentation *Managed Services *IT Outsourcing *IT Consulting

Thank you for your consideration, please submit your resume today! Visit us at www.slaitconsulting.com

**Must be able to work for any employer in the United States. No Visa sponsorship.**

SLAIT Consulting is an Equal Opportunity Employer
          MongoDB DBA/Architect (Job #6368)   
SLAIT Consulting is currently seeking a MongoDB DBA/Architect for our client in the Norfolk, VA area.

Summary:
The incumbent will participate in technical research and development to enable continuing innovation for the Client's online business services, while ensuring that all database software systems and related procedures adhere to organizational values.

Responsibilities:
* Support existing database systems, including traditional RDBMS such as Oracle/MySQL and more modern, document-oriented, data stores of Apache Solr and MongoDB.
* Responsible for the installation, configuration, operation, and maintenance of all database software and related infrastructure.
* Work closely with management and other members of the team to both implement and ensure timely completion of goals and deliverables.
* Stay current with emerging technologies and procedures and make opportunities to integrate them into operations and activities.
* Research and recommend innovative, and where possible, automated approaches for database administration tasks.
* Identify approaches that leverage our resources and provide economies of scale.

Required Skills:
* Bachelor's degree with a technical major such as engineering or computer science.
* 2+ years of general database administration experience focused on any modern RDBMS or NoSQL database software.
* 2+ years of experience with Linux based operating systems.
* Experience with Perl, JavaScript, and/or shell programming a plus.


Why SLAIT?
We have been growing since 1990 with offices in Virginia, Gaithersburg, MD., New York, Raleigh, NC, and Austin TX. For over twenty three years, we have delivered customized, creative IT solutions for customers in the commercial, and state and local government sectors.
*Staff Augmentation *Managed Services *IT Outsourcing *IT Consulting

Thank you for your consideration, please submit your resume today! Visit us at www.slaitconsulting.com

**Must be able to work for any employer in the United States. No Visa sponsorship.**

SLAIT Consulting is an Equal Opportunity Employer
          RHEL Linux Administrator Contract Job in Reston VA 22096   
RHEL Linux Administrator - Position Responsibility:
Provide RHEL Linux Systems Administration in a large environment/automation tools

Required Experience
• Experience in any of the following – Datastax, Hadoop, Cassandra, Solr
• IBM Cognos
• Secure file transfer technologies
• SAS implementations
• Amazon Web Services Cloud implementations
• Performance testing tools, Jmeter, Jenkins, Webload etc.
• Experience with the Oracle suite including Weblogic Application Server
• Experience with Apache web servers

Bonus Skills:
• Experience in any of the following – Datastax, Hadoop, Cassandra, Solr
• IBM Cognos
• Secure file transfer technologies
• SAS implementations
• Amazon Web Services Cloud implementations

Preferred Education/Certifications:
• RHEL, AWS, Apache

Please respond by clicking link and sending word resume for this RHEL Linux contract job in Reston VA 22096
          Middleware/Operations Engineer (Job #6450)   
Middleware/Operations Engineer
The Middleware/Operations Engineer is a mid-senior level role that will be
a part of our Infrastructure Engineering team. This resource should be
able to quickly understand technology concepts and work independently.

Demonstrated Experience with:
Middleware Infrastructure (Apache/WebLogic) Linux/Unix, and VMware
Setting up environments / infrastructure

Experience with the following is a plus but not required:
Adobe Experience Manager (formerly CQ5)
Oracle DB Administration
Active Directory
Oracle Access Manager
EBS Admin
SiteMinder
          Senior C++ Software Engineer job in Herndon, VA   
Seeking a Senior C++ Software Engineer for a permanent job in Herndon, VA.

We are seeking an innovative and creative Senior Software Engineer who is ready for the challenges, responsibilities, and rewards that come with working in a high-energy, fast-paced environment. Candidates must have a strong technical background and be capable of coming up to speed on new technologies quickly. Good communication skills, great problem solving skills, and the ability to work both individually and collaboratively in a team environment are required. If you enjoy working in a fast-paced environment with the smartest team, and the very latest technology, then this is the job for you!

KEY JOB RESPONSIBILITIES:
• Architect, design, develop, test and integrate company software.
• Participate in code reviews and improve software quality.
• Mentor or help junior members of the team.
• Support customers as needed
• Other duties as assigned.

Required Experience:

Education: BS/MS in Computer Science, Electrical Engineering or Mathematics, or equivalent experience
Experience: Over 7 years of industry experience in software engineering.

Professional Qualities:
• Must be able to work in a fast paced development environment.
• Must be able to analyze and solve technical problems.

Personal Qualities:
• Must have strong interpersonal skills and be self-motivated.
• Must be able to complete tasks in a timely manner.
• Must be able to communicate (oral/written) effectively.
• Must be able to work under pressure.

Technical Requirements: Protocol & Core Software Team (PCS):
• Solid experience with C++ and object oriented design and development.
• Strong experience with Inter Process communications.
• Strong Knowledge of TCP/IP, UDP, sockets, VOIP etc.
• Very good knowledge of Design Patterns.
• Strong Knowledge of Linux or a POSIX O/S environment.
• Good Experience with shell scripting.
• Knowledge of Satellite communication is a plus.
• Knowledge of Cryptographic concepts (Symmetric Key, Asymmetric Key, IPsec, TLS) is a plus.
• Understanding of IP routing is a plus.

Interested in this Senior C++ Software Engineer job in Herndon, VA? Apply here!
          Technical Support Engineer(13-jessi) - The Technical Support Engineer Will Support Post-s   
This Technical Support Engineer(13-jessi) Position Features:
• The Technical Support Engineer Will Support Post-s
• Requirements: 2-3 Years As A Post Sales/support
• Familiarity With Web Security Products And Proto
• Great Pay to $80K

Immediate need for technical support engineer(13-jessi) seeking the technical support engineer will support post-s, requirements: 2-3 years as a post sales/support and familiarity with web security products and proto. Linux administration, MCSE an advantage., background check required. and looking for both location: california and va. will be keys to success in this growing, dynamic, stable organization. Will be responsible for experience with service providers such as :web h, service oriented. willing to travel.willing and and education: bachelor's degree in computer science for Computer Hardware company. Great benefits. Apply for this great position as a technical support engineer(13-jessi) today! We are an equal employment opportunity employer and will consider all qualified candidates without regard to disability or protected veteran status.
          Unix/Linux Engineer (Job #6325)   
SLAIT Consulting, a National IT Consulting Firm headquartered in VA Beach, is currently seeking Linux/Unix Engineers for contract opportunities with our direct client in Richmond, VA. The UNIX/Linux System Engineers should be capable of using established processes and tools to support a mid-size to large IT environment. He/She should be able to evaluate data and develop plans for timely completion of assigned tasks, and provide technical guidance to peers and business owners.

The ideal candidate is a mid-level to senior-level UNIX/Linux engineer possessing broad knowledge of various UNIX-based operating systems, with an emphasis on Linux (Red Hat), and strong knowledge of scripting to enhance daily administration tasks on standalone and clustered platforms in a multi-tier network. The candidate must have experience in the implementation, maintenance, and troubleshooting of HBA/SAN connections, and should have knowledge of the Solaris and HP-UX operating systems. The candidate will provide recommendations for improving existing systems, transfer knowledge and expertise, and assist with staff development.

Preferred Qualifications:
Candidates with these desired skills will be given preferential consideration:
• Red Hat and other versions of the Linux Operating System
• Experience with patching Linux
• Experience with HBA/SAN implementation
• Basic understanding of database and application functions
• Comprehensive knowledge of network technologies, i.e., Switch, Firewall, Router
• Strong troubleshooting methodology with ability for creative problem-solving
• One or more industry recognized IT certifications
• Understanding of the ITIL Framework
• Experience with IT infrastructure integration/consolidation projects
• Industry recognized certifications are preferred.

*# of Years Experience: 7

Educational Requirements:
B.S. in Computer Science, Information Technology or other job experience

If you would like to learn more about this and other opportunities we have at SLAIT please email Jessie Urbanski at jessie.urbanski@slaitconsulting.com!

SLAIT Consulting has long standing relationships for nearly 25 years with the majority of firms throughout Richmond and Hampton Roads. Work with our team of experienced recruiters who can supply a variety of opportunities so that you may achieve your technical and career goals.

SLAIT Consulting
4405 Cox Road
Suite 100
Glen Allen, VA 23060
Main: (804) 270-4900
Fax: (804) 279-1091

SLAIT Consulting is an Information Technology consulting services company that specializes in delivering customized, creative IT solutions for customers in the commercial and public sector. Since 1990, SLAIT has been listening to our clients' needs and on-boarding skilled IT professionals to assist our clients with their toughest challenges. SLAIT's service offerings span our clients' requirements to consume services through models such as staffing, fixed service deliverables and projects, managed services, or outsourcing of projects and functions. SLAIT provides complete life-cycle services for your data-center-centric requirements specializing in converged infrastructure, storage systems, virtualization, private-cloud and hybrid-cloud. SLAIT has also developed unique solutions and services integrating wireless-access, security appliances, as well as bring-your-own-device business solutions for corporate and education environments. SLAIT is a privately owned firm, with a staff of over 400 professional employees serving clients along the East Coast and West to Texas. SLAIT works with our clients to take new approaches to reducing costs, increasing performance and mitigating risks. SLAIT is headquartered in Virginia Beach, VA, with regional branch offices in Richmond, VA; Gaithersburg, MD; Raleigh, NC; New York, NY; and Austin, TX. SLAIT Consulting is an EOE
          Senior C++/PostgreSQL or MySQL Software Engineer job in Herndon, VA   
Seeking a Senior C++ Software Engineer for a permanent job in Herndon, VA.

We are seeking an innovative and creative Senior Software Engineer who is ready for the challenges, responsibilities, and rewards that come with working in a high-energy, fast-paced environment. Candidates must have a strong technical background and be capable of coming up to speed on new technologies quickly. Good communication skills, great problem solving skills, and the ability to work both individually and collaboratively in a team environment are required. If you enjoy working in a fast-paced environment with the smartest team, and the very latest technology, then this is the job for you!

KEY JOB RESPONSIBILITIES:
• Architect, design, develop, test and integrate company software.
• Participate in code reviews and improve software quality.
• Mentor or help junior members of the team.
• Support customers as needed
• Other duties as assigned.

QUALIFICATIONS:
Education: BS/MS in Computer Science, Electrical Engineering or Mathematics, or equivalent experience
Experience: Over 7 years of industry experience in software engineering.

Professional Qualities:
• Must be able to work in a fast paced development environment.
• Must be able to analyze and solve technical problems.

Personal Qualities:
• Must have strong interpersonal skills and be self-motivated.
• Must be able to complete tasks in a timely manner.
• Must be able to communicate (oral/written) effectively.
• Must be able to work under pressure.

Technical Requirements:
• Solid experience with C++ and object oriented design and development.
• Strong experience with Inter Process communications.
• Strong Knowledge of TCP/IP, UDP, sockets
• Very good knowledge of Design Patterns.
• Strong Knowledge of Linux or a POSIX O/S environment.
• Good Experience with shell scripting.
• Experience with a relational database, such as MySQL or PostgreSQL.
• Knowledge of Satellite communication is a plus.
• Knowledge of Cryptographic concepts (Symmetric Key, Asymmetric Key, IPsec, TLS) is a plus
• Understanding of IP routing is a plus.

Interested in this Senior C++ Software Engineer job in Herndon, VA? Apply here!
          Software Build/Configuration Management-Contract to Hire Job in Herndon VA 20170   
Position Responsibilities:
• Configuration Management/Software Build Specialist will be responsible for developing and maintaining the Software Configuration Management plan and policies, configuration item identification
• software application build management, develop and maintain branching strategies, provide configuration status accounting, and ensure configuration management procedures are followed.
• This position will perform configuration management audits and support quality assurance in validation of product artifacts and deliverables.
• The Software Configuration Management Specialist will perform product builds and create & maintain build scripts.
• You must be team oriented and contribute to the overall project objectives and configuration management functions.

Position Requirements
• Education: Bachelor's Degree in Engineering or a Natural Science
• 3+ years of experience as a software engineer, developing embedded, web-based or server applications with an understanding of the full development lifecycle activities.
• 3+ years of experience in building and releasing software applications in a controlled environment with an understanding of full lifecycle configuration management activities, while performing related system administration activities.
• Strong knowledge of software development, build and configuration management tools. Linux, Windows, Microsoft WORD, EXCEL, PowerPoint, MS Project, Visio, Jira

Strongly Desired Skills:
• Solid understanding and experience with version control systems, change management systems and documentation management systems; such as GIT, CVS, Subversion, Mercurial, Perforce, ClearCase, Jira, Bugzilla, Crucible, Fisheye, Alfresco
• Experience working within and understanding an Open Source consumer model with knowledge of GPL
• Strong experience with Linux and related administrative activities
• Experience in an Agile development environment and Continuous Integration
• Knowledge of build automation, as well as experience with proven CI systems such as Jenkins, Bamboo, Maven, or Cruise Control
• Good understanding and experience with scripting such as bash / Perl / Python
• Experience with SQL, especially MySQL and PostgreSQL
• Experience with compilers and cross compiling, esp. gcc, and the Altera tool suite
• Experience with Red Hat based Linux, 32-bit and 64-bit; RPM spec file creation and administration; managing Red Hat based package repositories
• Experience with creating kickstart files from scratch
• Experience administering version control systems, especially modern distributed systems such as Git.

Please click on link and send Word updated resumes for this contract to hire Software Build/Configuration Management Specialist in Herndon VA.
          Senior Linux System Administrator (Job #6385)   
Our Senior Systems Administrator practices a hybrid of System Administration, Cloud Engineering, Configuration/Release Management, Systems integration, and Software Development.

Responsibilities
• Implement and manage multi-server deployments in a virtual environment, such as AWS or Rackspace Cloud.
• Collaborate with development teams to deploy new code to production environments.
• Set up and maintain monitoring systems and alarms for production environments.
• Set up back-up mechanisms for production environment and test disaster recovery mechanisms.
• Collaborate with Operations and Development teams to evolve operations best practices.
• Participate in 24/7 on-call schedules with team members

Qualifications
• 5-7 years' experience combining Development and Operational Administration.
• In-depth knowledge of an infrastructure automation framework, such as Chef, Puppet, or Ansible.
• Strong Linux System Administration experience – specifically for web applications.
• Proficiency in scripting languages and shells – Perl, Python, Ruby, PHP, and shell scripting.
• Experience setting up and managing MySQ