Reply to Fog IP Address change on Tue, 08 Dec 2015 05:44:36 GMT   

It reinstalled okay by using Tom’s settings, sorry Ubuntu 14.04 and fog Version: 5666
When I try to click on the Log Viewer from the web interface I am still receiving an error

Fatal error: Uncaught exception ‘Exception’ with message ‘FOGFTP: Login failed. Host: 192.168.158.10, Username: fog, Password: Uwzuif0chcH5, Error: ftp_login(): Login incorrect.’ in /var/www/html/fog/lib/fog/fogftp.class.php:29 Stack trace: #0 /var/www/html/fog/lib/pages/fogconfigurationpage.class.php(635): FOGFTP->connect() #1 [internal function]: FOGConfigurationPage->log() #2 /var/www/html/fog/lib/fog/fogpagemanager.class.php(67): call_user_func(Array) #3 /var/www/html/fog/management/index.php(24): FOGPageManager->render() #4 {main} thrown in /var/www/html/fog/lib/fog/fogftp.class.php on line 29

I have also used your script in bashrc and specified my proxy username pass ip and port


          Comment on Mount remote filesystem via ssh protocol using sshfs and fuse [Fedora/RedHat/Debian/Ubuntu way] by How do you remotely administer your Linux boxes? - Just just easy answers   
[...] set up FUSE to (auto)mount your remote volume. [...]
          Comment on Nokia E-series sync with Evolution via Bluetooth in Ubuntu by virtual dj free download   
This speaker package is perfect for the beginner DJ. Select the "Browse" tab on the menu to open a link to your computer and media files. These are base, drums, tambourine, piano, and guitar.
          Comment on Convert WMV into AVI with Ubuntu by raspberry ketones cvs   
Pretty nice post. I just stumbled upon your weblog and wanted to say that I have truly enjoyed surfing around your blog posts. In any case I'll be subscribing to your rss feed and I hope you write again very soon!
          Linux Games - Urban Terror Review   
Available from www.urbanterror.info or from www.playdeb.net if you're using ubuntu, I take a brief look at one of the more popular games on ... tags: FedoraGamesLinuxOpenReviewSourceTerrorLinux Games - Urban Terror Review
Linux Overmonnow
          Install Driver nVidia di Ubuntu   

Cara menginstall driver nVidia di Ubuntu sangatlah simple, cara ini sudah saya praktekkan di Ubuntu 13.10. Jika Anda ingin mencobanya di versi dibawah 13.10 saya tidak bertanggung jawab bila terjadi error ^_^

Pertama bukalah terminal Anda lalu ketikkan:
sudo apt-get update
sudo apt-get install nvidia-current

Setelah selesai instalasi selesai, reboot komputer Anda
sudo reboot

Semoga membantu
          Install cURL di Ubuntu   

Cara menginstall cURL di Ubuntu cukup mudah dan simple, tetapi saya baru mencobanya pada Ubuntu 12.04 dan 13.10, jadi saya tidak menyarankan cara ini untuk versi dibawah 12.04.

Buka terminal Anda lalu ketikkan perintah dibawah ini:
sudo apt-get install curl libcurl3 libcurl3-dev php5-curl


Setelah selesai diinstall, restart webserver Anda:
sudo /etc/init.d/apache2 restart
atau:
sudo service apache2 restart


Semoga membantu
          Membetulkan Grub Rescue Ubuntu   
Empat hari yang lalu ketika saya menginstall ulang Ubuntu saya, terjadi error grub rescue. Setelah bertanya pada om kita semua (google :v), saya berhasil menemukan solusinya. Dan sekarang saya akan membagikan solusinya bagi Anda yang terkena masalah ini juga. Tapi ini hanya untuk Ubuntu Desktop, tidak untuk Ubuntu Server. :)

Untuk memperbaiki masalah ini, Anda harus menyiapkan Ubuntu Desktop Live CD Anda karena kita akan membutuhkannya.

Catatan: Jika Anda tidak mengikuti perintah-perintah dibawah ini dengan benar, kemungkinan Anda akan mendapatkan error : /usr/sbin/grub-probe: error: cannot stat 'aufs'

Langkah-langkah yang harus Anda lakukan adalah:
1. Lakukan boot melalui Ubuntu Desktop Live CD Anda.
Jika muncul tampilan seperti ini klik "Try Ubuntu"
2. Jika tidak muncul tampilan seperti di atas, bukalah Terminal dengan menekan:
    CTRL + ALT + T

3. Ketik perintah ini di terminal: sudo fdisk -l
    Perintah tersebut akan menampilkan seluruh partisi Anda.










     Partisi dengan System berupa Linux adalah partisi untuk Ubuntu Anda.
     Pada screenshot partisi tersebut berada di /dev/sda11

4. Mount partisi ubuntu Anda dengan mengetikkan perintah:
    sudo mount /dev/sdXX /mnt  (contoh: sudo mount /dev/sda11 /mnt)

5. Lalu ketik perintah:
    sudo mount /dev/sdXX /mnt/boot

6. Mount virtual filesystems:
    sudo mount --bind /dev /mnt/dev 
    sudo mount --bind /proc /mnt/proc
    sudo mount --bind /sys /mnt/sys

7. Agar hanya grub utilities dari Live CD saja yang dieksekusi, ketik:
    sudo mount --bind /usr /mnt/usr
    sudo chroot /mnt

8. Lalu ketik:
    update-grub
    update-grub2

9. Install ulang grub
    grub-install /dev/sdX (contoh: grub-install /dev/sda tidak perlu menggunakan angkanya)

10. Verifikasi grub yang diinstall
     sudo grub-install --recheck /dev/sdX

11. Keluar dari chroot dengan menekan CTRL + D

12. Unmount virtual filesystems
     sudo umount /mnt/dev
     sudo umount /mnt/proc
     sudo umount /mnt/sys
     sudo umount /mnt/boot

13. Unmount Live CD /usr
     sudo umount /mnt/usr

14. Umount device terakhir
     sudo umount /mnt

15. Reboot
     sudo reboot

Sekian dari saya untuk hari ini, sampai jumpa lagi dilain topik. ^_^

Sumber : http://opensource-sidh.blogspot.com/2011/06/recover-grub-live-ubuntu-cd-pendrive.html
          Firefox hanya bisa dibuka oleh Root   
Apa anda sedang mengalami masalah karena firefox pada Ubuntu Anda hanya bisa dibuka oleh user root? Kalau iya, maka Anda berada ditempat yang tepat. :)

Beberapa waktu yang lalu saya mengalami masalah ini, setelah mengacak-acak google, akhirnya saya menemukan solusinya. Dan ternyata solusinya cukup mudah. :)
Pertama-tama tutup terlebih dahulu firefox Anda, kemudian buka terminal dan ketikkan command berikut:

sudo chown -R $USER:$USER ~/.mozilla

Jreeeng, firefox Anda sudah bisa dibuka oleh user selain root. :)
Sekian dari saya, sampai jumpa dilain topik. ^_^
          Chatting Facebook menggunakan Empathy Ubuntu   
Sejak 2 hari yang lalu internet saya sedang mengalami gangguan jadi tidak bisa mengupdate blog saya. Tapi alhamdulillah sekarang internet saya sudah kembali lancar.. Yeah.. ;)

Pada posting kali ini saya akan berbagi cara chatting facebook tanpa login ke facebook dengan menggunakan program chat bawaan Ubuntu, yaitu Empathy. Agar waktu tidak terbuang sia-sia, kita mulai saja tutorialnya.. ^^
  1. Pertama Anda harus mendownload sebuah plugin facebook disini http://adf.ly/6JvTc
  2. Lalu install plugin tersebut.
  3. Jika sudah selesai di install, buka Empathy Anda. (Application >> Internet >> Empathy IM Client)
  4. Pilih menu Edit >> Account atau bisa menggunakan tombol F4.
  5. Setelah itu klik tombol Add yang ada di kiri bawah. Lalu pada bagian protocol pilih Facebook Chat kemudian masukkan username dan password Anda.
  6. Kemudian klik Apply.
*Note : "username bukanlah email Anda, tetapi url profile facebook Anda. Contoh : http://www.facebook.com/yoanpratamaputra jadi yang merupakan username Anda adalah yoanpratamaputra"

Sekian dari saya, semoga bermanfaat bagi kita semua.. ^_^

          Cheese, Ambil Photo dan Video melalui WebCam di Ubuntu   
Anda sedang mencari program untuk mengambil photo atau video menggunakan webcam di ubuntu? Jika Ya maka disinilah tempat yang benar. Dalam artikel kali ini saya akan memberikan satu program open source yang bisa digunakan untuk mengambil photo atau video melalui webcam. Program ini bernama Cheese. :)
Sepertinya Anda tampak terburu-buru sekali, kalau begitu mari kita mulai cara penginstalannya. :D

Sebenarnya cara menginstallnya sederhana saja, karena data program ini sudah tersedia pada repository ubuntu kita. Jadi kita mulai dengan mengetikkan perintah dibawah ini untuk mengupdate terlebih dahulu repository kita siapa tahu ada update terbaru dari program-program kita atau cheese.
$ sudo apt-get update
Setelah itu, kita mulai installasi cheesenya dengan perintah berikut.
$ sudo apt-get install cheese
Mudah bukan?? ^^

Sekian dari saya,  semoga bermanfaat bagi kita semua.. ^_^

          Menghapus Repository PPA Ubuntu   
Ada apa? Anda bingung cara menghapus repository ppa ubuntu? Tenang, kali ini saya akan membahas tentang itu kok. Jadi Anda tinggal duduk manis di depan komputer Anda dan ikuti langkah-langkah yang saya berikan di bawah ini. :D

Sebenarnya menghapus repository ppa itu tidak sulit, hanya butuh beberapa detik dengan klik-klik saja. :)
Wah, sepertinya Anda sudah tidak sabar ya? Ya sudah kita mulai saja. ;)

Pertama buka Software Source Anda. (System >> Administration >> Software Source)
Jika sudah terbuka, pilih tab Other Software seperti gambar di bawah ini.
Beri ceklist pada ppa yang ingin Anda hapus lalu klik Remove.
Jika sudah selesai di remove, sekarang klik Close. Setelah itu akan ada prompt permintaan reload, pilih saja reload agar memperbarui ppa yang ada. :D


Sekian dari saya, semoga bermanfaat bagi kita semua.. ^_^

          Upgrade Mozilla Firefox 4 di Ubuntu   
Tanggal 24 Maret 2011, Firefox baru saja merilis versi 4 nya setelah melakukan dengan beberapa versi beta dan RC nya. Firefox versi 4 ini benar-benar mendapat respon yang sangat baik dari masyarakat dunia, khususnya para peselancar internet. Bahkan katanya, Firefox sudah di download lebih dari 5 juta orang saat dirilis. Wow, bukan main ketenaran browser yang satu ini.

Sekarang mari kita bahas cara mengupgradenya melalui Ubuntu. :D
Pertama saya ingin memberitahu bahwa Firefox 4 hany support dengan Ubuntu versi 10.04 keatas. Jadi bagi Anda yang masih menggunakan 9.10 kebawah, mau tidak mau Anda harus menginstall ulang Ubuntu Anda atau melakukan upgrade melalui Update Manager.

Bagi Anda pengguna 10.04 keatas, ayo kita mulai tutorialnya. Sebenarnya caranya masih cukup sederhana seperti mengupgrade Firefox yang pernah saya bahas pada postingan sebelumnya.

Buka terminal Anda dan ketikkan perintah-perintah di bawah ini.
$ sudo add-apt-repository ppa:mozillateam/firefox-stable
$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo apt-get install firefox ubufox
Setelah selesai, buka Firefox Anda. Tadaaa, Firefox 4 sudah terinstall. :D

Sekian dari saya, semoga bermanfaat bagi kita semua. ^_^

          Cara Uninstall LAMP yang Benar   
Dua hari yang lalu saya baru memposting tentang cara install LAMP, nah sekarang saya akan memberi tahu cara menguninstallnya bagi yang membutuhkan.

Banyak orang yang menguninstall dengan cara dibawah ini.
$ sudo apt-get remove lamp-server^
Padahal cara tersebut adalah salah! Menguninstall LAMP dengan cara tersebut tidak hanya me-remove LAMP tetapi juga akan me-remove sebagian dari program Anda yang ada seperti, Open Office, Games, dll.
Tapi bagi Anda yang tidak percaya boleh membuktikannya sendiri. Saran saya hanya persiapkan CD installer ubuntu bagi yang ingin membuktikannya. :D

Baiklah bagi Anda yang ingin menguninstall LAMP dengan benar, ikuti cara-cara di bawah ini :

Pertama yang harus Anda lakukan adalah menguninstall phpMyAdmin.
$ sudo apt-get purge libapache2-mod-auth-mysql phpmyadmin
Jika keluar prompt, pilih saja Yes.

Kedua adalah menguninstall MySQL.
$ dpkg -l | grep ^ii | grep mysql-server | awk -F' ' '{ print $2 }'
Contoh: Terminal saya memberikan list ini.
mysql-server
mysql-server-5.1
mysql-server-core-5.1
php5-mysql
Jadi ayo kita uninstall dengan memilih yang mengandung kata "mysql-server".
$ sudo apt-get purge mysql-server mysql-server-5.1 mysql-server-core-5.1

Ketiga adalah menguninstall apache.
$ dpkg -l | grep ^ii | grep apache2 | awk -F' ' '{ print $2 }'
Akan muncul list lagi.
apache2
apache2-mpm-prefork
apache2-utils
apache2.2-bin
apache2.2-common
libapache2-mod-php5
Lalu ketikkan perintah seperti di bawah ini.
$ sudo apt-get purger apache2 apache2-mpm-prefork apache2-utils apache2.2-bin apache2.2-common libapache2-mod-php5

Yang terakhir adalah pembersihan. :D
$ sudo apt-get autoremove

Sekian dari saya, semoga bermanfaat bagi kita semua. ^_^

          Menangani phpMyAdmin yang Not Found di Ubuntu   
Apa Anda pengguna Ubuntu? Apa Anda baru menginstall phpMyAdmin? Apa phpMyAdmin Anda "Not Found"? Jika jawabannya "iya" maka disinilah tempat Anda mendapatkan solusinya.

Disini saya akan membahas bagaimana cara menangani phpMyAdmin Anda yang Not Found menjadi Found. Hehehe.. ;)

Baiklah kita mulai saja, pertama buka terminal Anda dan jalankan perintah di bawah ini.
$ ln -s /etc/phpmyadmin/apache.conf /etc/apache2/conf.d/phpmyadmin.conf
$ sudo /etc/init.d/apache2 restart
Jika sudah selesai, buka browser Anda dan ketikkan url "localhost/phpmyadmin" tanpa tanda petik.
Tadaa, phpmyadmin Anda sudah ditemukan. :D

Sekian dari saya, semoga bisa bermanfaat bagi kita semua. ^_^

          Cara Install LAMP di Ubuntu   
Sedikit review tentang Lamp dari wikipedia :
LAMP adalah istilah yang merupakan singkatan dari Linux, Apache, MySQL dan Perl/PHP/Phyton. Merupakan sebuah paket perangkat lunak bebas yang digunakan untuk menjalankan sebuah aplikasi secara lengkap.
Komponen-komponen dari LAMP:
  • Linux sebagai sistem operasi
  • Apache HTTP Server sebagai web server
  • MySQL sebagai sistem basis data
  • Perl atau PHP atau Pyton sebagai bahasa pemrograman yang dipakai
 Jika Anda sudah tahu tentang Lamp tetapi masih bingung cara menginstallnya, mari kita lakukan bersama sesuai tutorial di bawah ini :

Pertama buka terminal Anda (Applications >> Accessories >> Terminal) lalu ketikkan perintah di bawah ini :
$ sudo apt-get install lamp-server^
Catatan: tanda (^) bukan merupakan kesalahan pengetikan.

Jika penginstalan sudah selesai akan muncul sebuah prompt permintaan password root MySQL. Isi dengan password yang Anda inginkan. Lalu akan muncul lagi sebuah prompt, kali ini permintaan confirm password. Ketikkan kembali password Anda tadi. Setelah itu, Lamp akan kembali menginstall.

Jika Lamp sudah selesai di install, buka browser Anda dan ketikkan http://localhost/ pada url bar Anda. Jika muncul tulisan dengan header "It Works!" berarti instalasi Anda sukses.

Selanjutnya, untuk menginstall phpmyadmin ketikkan perintah berikut pada terminal Anda :
$ sudo apt-get install libapache2-mod-auth-mysql phpmyadmin
Saat installasi phpmyadmin dimulai, akan muncul prompt sebanyak empat kali. Lakukan 4 hal di bawah ini untuk tiap-tiap prompt yang keluar :
  1. Berupa pilihan. Pilih apache2 lalu tekan Enter.
  2. Berupa pilihan lagi, gunakan tombol Tab untuk memilih dan pilih Yes.
  3. Sebuah prompt permintaan password root untuk MySQL Anda, isi dengan password yang Anda inginkan.
  4. Sebuah prompt permintaan konfirmasi password, isi dengan password root MySQL Anda.
Setelah installasi phpmyadmin selesai, buka browser Anda dan ketikkan http://localhost/phpmyadmin untuk melihat halaman phpmyadmin Anda.

Sekian dari saya, semoga bisa bermanfaat untuk kita semua. ^_^

          Cara Upgrade Mozilla Firefox di Ubuntu   
Anda baru pertama kali menggunakan linux ubuntu? Bingung cara upgrade mozilla firefoxnya? Tenang, Anda sudah berada di tempat yang tepat kok. Disini saya akan menjelaskan bagaimana cara mengupgrade mozilla firefox di ubuntu agar firefox Anda selalu up to date.. Hehehe

Mengupgrade firefox dengan cara ini tidak akan menghilangkan bookmark dan data history Anda, tetapi mungkin akan ada beberapa add ons yang tidak bisa berfungsi lagi jika add ons tersebut tidak support dengan firefox kita yang baru.

Tampaknya Anda sudah tidak sabar ya, baiklah kalau begitu kita mulai saja tutorialnya.. ;)

Cara ini sebenarnya sangat sederhana, buka terminal Anda. (Application > Accesories > Terminal)
Lalu ketikkan perintah ini :
$ sudo apt-get update
$ sudo apt-get install firefox
Penjelasan :
  • $ sudo apt-get update : "perintah ini berfungsi untuk mencari semua data update untuk semua program yang ada di komputer Anda. Tetapi perintah ini tidak langsung menginstallnya secara otomatis."
  • $ sudo apt-get install firefox : "perintah ini berfungsi menginstall firefox menggunakan data update yang sudah dicari menggunakan perintah sebelumnya."

Sekian tutorial dari saya, semoga tutorial di atas bermanfaat bagi kita semua. :D

          Cara Install Modem di Ubuntu 9.10   
Disini saya akan menggunakan wvdial jadi sebaiknya download dulu wvdial disini:
  1. libuniconf4.4_4.4.1-0.2ubuntu2_i386.deb
  2. libwvstreams4.4-base_4.4.1-0.2ubuntu2_i386.deb
  3. libwvstreams4.4-extras_4.4.1-0.2ubuntu2_i386.deb
  4. libxplc0.3.13_0.3.13-1build1_i386.deb
  5. wvdial_1.60.1_i386.deb
Setelah anda mendownload wvdial silahkan install 4 file tersebut melalui terminal. Misalkan Anda mendownload file tersebut di folder Downloads, maka:


~$ cd Downloads
~$ sudo su

# sudo dpkg -i libxplc0.3.13_0.3.13-1build1_i386.deb
# sudo dpkg -i libwvstreams4.4-base_4.4.1-0.2ubuntu2_i386.deb
# sudo dpkg -i libwvstreams4.4-extras_4.4.1-0.2ubuntu2_i386.deb
# sudo dpkg -i libuniconf4.4_4.4.1-0.2ubuntu2_i386.deb
# sudo dpkg -i wvdial_1.60.1_i386.deb

# exit
Istilah-istilah:
  • Cara buka terminal. Klik Application –> Accessories –> Terminal.
  • Biasanya saat di terminal ada kalanya membutuhkan password. Dan password yang anda ketikkan tidak akan kelihatan jadi teruskan saja ketikkan password anda. Kemudian tekan Enter dan kalau sukses maka perintah yang anda masukkan akan dieksekusi. Dan tidak meminta password lagi.
CARA #1
  1. Sebelum install cabut semua usb device di komputer Anda kecuali mouse. :D
  2. Setelah itu, colok modem Airflash/apapun modem anda ke komputer. Tunggu sampai muncul drive baru seperti biasa ketika kita colok USB Flash Drive.
  3. Kemudian klik kanan drive baru tersebut klik EJECT bukan yg lain.
  4. Buka terminal dan ketikkan perintah berikut lsusb
  5. Maka akan muncul beberapa baris kalimat seperti ini :
    Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 002 Device 003: ID 21f5:2008
    Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 001 Device 003: ID 0ac8:3343 Z-Star Microelectronics Corp. Sirius USB 2.0 Camera
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
  6. Perhatikan yang saya bold. 21f5 = id vendor sedangkan 2008 adalah id produk. Kalau modem Anda tipe lain, biasanya seperti id vendor dan id productnya akan berbeda.Untuk mengenali id vendor modem biasanya sih tidak ada embel apa-apa lagi di belakangnya. Atau kalaupun ada embel2 lagi biasanya qualcomm, dll dari perusahaan telekomunikasi.
  7. Masih di terminal dan ketikkan kode berikut gedit /etc/udev/rules.d/99-evdo-modem.rules
  8. Akan terbuka seperti notepad setelah itu copy dan paste kode berikut SYSFS{idVendor}==”21f5”, SYSFS{idProduct}==”2008″, RUN+=”/usr/bin/eject %k kemudian save.Note: Pada {id vendor} dan {id product} sesuai modem Anda.
  9. Kemudian ketikkan perintah ini di terminal sudo modprobe usbserial vendor=0x21f5 product=0×2008
  10. Selanjutnya ketikkan dmesg
  11. Saya asumsikan Anda sudah menginstall wvdial, kalau belum maka Anda harus install wvdial dulu sebelum bisa melangkah ke step selanjutnya. Kalau sudah maka step ini bisa dilanjutkan. Ketikkan perintah sudo gedit /etc/wvdial.conf maka akan muncul seperti notepad dan selanjutnya tambahkan kode berikut ini (jangan hapus kode yg sudah ada disana) :
    [Dialer flexi]
    Auto DNS = on
    Init1 = ATZ
    Init2 = ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0
    Stupid Mode = yes
    Modem Type = Analog Modem
    ISDN = 0
    New PPPD = yes
    Phone = #777
    Modem = /dev/ttyUSB0
    Username = *******@free
    Password = ******
    Baud = 460800
    Dial Command = ATDT
    FlowControl = CRTSCTS
    Ask Password = 0
    Stupid Mode = 1
    Compuserve = 0
    Idle Seconds = 3600
  12. Sekarang jalankan perintah : sudo wvdial flexi
  13. Jika muncul tulisan berikut ini :
    –> WvDial: Internet dialer version 1.60
    –> Cannot get information for serial port.
    –> Initializing modem.
    –> Sending: ATZ
    ATZ
    OK
    –> Sending: ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0
    ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0
    OK
    –> Modem initialized.
    –> Idle Seconds = 3600, disabling automatic reconnect.
    –> Sending: ATDT#777
    –> Waiting for carrier.
    ATDT#777
    CONNECT
    –> Carrier detected. Starting PPP immediately.
    –> Starting pppd at Sun Jan 3 15:19:52 2010
    –> Pid of pppd: 2054
    –> Using interface ppp0
    –> pppd: ?u` @l`
    –> pppd: ?u` @l`
    –> pppd: ?u` @l`
    –> pppd: ?u` @l`
    –> pppd: ?u` @l`
    –> local IP address 1**.1*.1**.2** –> ini mungkin berbeda
    –> pppd: ?u` @l`
    –> remote IP address 1*.1*.2*.* –> ini mungkin berbeda
    –> pppd: ?u` @l`
    –> primary DNS address 1**.1**.1**.1** –> ini mungkin berbeda
    –> pppd: ?u` @l`
    –> secondary DNS address 1**.1**.1**.1** –> ini mungkin berbeda
    –> pppd: ?u` @l`Itu berarti anda sudah terkoneksi ke internet. Dan masih di terminal tekan CTRL + Shift + T kemudian ketik perintah berikut : ping -s -t yahoo.com
  14. Untuk disconnect atau mematikan koneksi tersebut anda cukup tekan CTRL + C
Cara ini juga bisa digunakan pada modem usb lainnya.. :D
Semoga bermanfaat..

          NINJA-IDE 3.0-alpha is Here!!   



NINJA-IDE 3.0-alpha is finally here!!!

After lots of months rewriting NINJA-IDE to implement the new architecture (which I must say, is going to bring new amazing things about: performance, extensibility, stability, and easy a lot of things for developers of plugins and the core of ninja), we are finally here with the 3.0-alpha version.
There are a lot of things that still needs to be implemented, several features to add, a bunch of issues to fix, etc, etc...

BUT if you are one of the brave ones, you can start playing around with NINJA-IDE, and let us know your ideas of what needs to be improved, problems you experienced, and everything you think would help us to make NINJA-IDE 3.0 the most awesome version so far!
As usual, we are developing NINJA-IDE 3.0 using NINJA-IDE 3.0, but it's always nice to hear from people using the IDE in different ways.

For the alpha version, we are not going to distribute installers for the different platforms, but if you want to start testing NINJA-IDE 3.0-alpha, you can grab the source code from Github and run it from sources (We've always tried to make it really easy to execute it this way for developers), also, if you are in Ubuntu, you can use the Daily PPA and install the last version from there.

Take a look at some of the visual changes that we have done in NINJA-IDE 3.0:



Now that we have the new architecture implemented in NINJA-IDE (something that we knew was going to take a lot of time, but the payoff was going to be huge!), we can start working on fixing the remaining issues, add some of the missing features, AND INCLUDE THE NEW FEATURES THAT WE PLANNED FOR THIS VERSION!! I can tell you that an incredible effort is being done from everyone in NINJA-IDE to build some really awesome things:
  • A whole new plugin environment, with virtualenv detection, pypi handling, etc
  • Lots of improves in Code Completion
  • Graphical Debugger
  • A new IntelliSensei module to bring to you the magic of metadata analysis
  • Several improves in the UI and UX
  • And lots of more things!!!! But... LOTS!!


And that it's not ALL!! The Web Team of NINJA-IDE is also working in a new website, to improve some internal functionalities, and to provide new sections, as a new blog, where we want to publish all the news about the new features being developed, updates in the documentation, videos about how to use and interact with ninja, and lots of more stuff!


Links:

          Charla: Haciendo codigo que analiza codigo con AST   
Les dejo el video y material de la charla que tuve la oportunidad de poder dar en la PyCon España:


Haciendo codigo que analiza codigo con AST:




Presentacion(Filminas)
          Mis charlas en PyCon Uruguay 2013   
Este año tuve la oportunidad de asistir y dar 2 charlas y una lightning talk en la PyCon de Uruguay, la verdad el evento me gusto muchisimo!! La gente muy buena onda, y el lugar y la organización me parecieron geniales.

Espero poder volver a ir el año que viene, porque fue una muy buena experiencia!

Aca les dejo los videos y material de las charlas que di (que ahora me doy cuenta que me habia olvidado de hacer un post al respecto):


Introducción a PyQt
 


Lightning Talk: Documentor
 


NINJA-IDE, un IDE especialmente diseñado para Python
 



Material de las charlas:


Y aca dejo algunas fotos (muy pocas) que saque durante el evento: PyCon Uruguay 2013
          TvStalker (Ubuntu Touch client and web integration)    
Client for Ubuntu Touch of TvStalker, you can see how the changes in your account affects what you see in the browser and what you see in the tablet, and how fast is to browse and check out your tv shows in the device.


          PyDay Junin   
Ayer estuve participando en el PyDay de Junin!!
El evento estuvo re bueno y como siempre lleno de buena onda y mucha gente de la que podes aprender muchas cosas.

Aca dejo el material de mis charlas:


          Tabu Party Game (Web + Ubuntu Client)   
These last 2 days i've been working in this app: Tabu Party Game for the Ubuntu App Showdown Contest. You can create Languages and Cards for each Languages from the web, and Play it from the web (not ready yet) and from the Ubuntu Client for Mobile and Desktop.
This is a video with a complete demonstration of the UI, the web, how to create cards, etc, etc:


          TvStalker Client for Ubuntu (Desktop and Mobile)   
I've playing these last days with a QML Application that i'm doing for the Ubuntu App Showdown Contest, the idea is to do an application where you can track all your Tv Shows, find out when a new episode is released, check out the airdates calendar of each Tv Show, discover new recommended shows, share tv shows with your friends, and get recommendations from them, and a lot of cool stuff!!
I've a semi-working web backend for this, and i've been moving forward with the UI for the desktop/mobile client.

Now the UI/UX is "complete" (at least i decide to change anything), this is a video of how it looks like:




Now it's time to keep working in the backend!!! (I've 2 more apps to implement for the contest after this one :D)
          thank   
thank for all Download: Ubuntu 12.10







          Synching your enviroment around   
Ubuntu 9.10 has just came out & iv rushed to upgrade, upgrading my 9.04 to 9.10 via the upgrade manager is a working option but usually id rather do a clean install & get a fresh system to poke around.
The main issue that clean install brings is restoring your old environment which includes installed programs, settings files (.bashrc, .vimrc etc..), non packaged applications (usually the latest & greatest versions of Clojure, Groovy etc..).
Another issue is the need to restore these settings on multiple machines while keeping it all consistent between them, there must a better way than working it all manually!

Well the solution that iv come with involve two tools, Dropbox & AutomateIt, Dropbox is no more than a fancy folder to the cloud synchronizer which enables me to access all my settings file from any machine, lsym-ing my .bashrc, .vimrc & any other file that id like to share across machines into the Dropbox folder is all that is required.

The second part involves AutomateIt, which is a configuration management framework written in Ruby, an easy way of re-creating symlinks from my Dropbox folder onto a new system is using the following AutomateIt script:

HOME = ENV['HOME']
DROP = "#{HOME}/Private/Dropbox"

def link(src,dst)
ln_s("#{DROP}/#{src}","#{HOME}/#{dst}")
end

# Vim
link "vim/.vimrc",".vimrc"
link "vim/.vim",".vim"
link "vim/.vimclojure-2.1.2",".vimclojure"
# Bash
link "BashEnv/.bashrc",".bashrc"
link "BashEnv/.inputrc",".inputrc"
# Languages
link "prog-langs/.clojure",".clojure"
link "prog-langs/.groovy",".groovy"
link "prog-langs/.jruby",".jruby"

The nice thing about it is that it hides all the nitty gritty details that id have to figure out when using plain Ruby (or any other environment for that matter), AutomateIt wraps common tasks with a bash like DSL, unlike bash this DSL is portable across multiple unix systems & it hides many of the complexities that follow.
AutomateIt performs actions only when they are required, this solves annoying cases that rise when scripts run more than once on a system (e.g. appending text to files).

Another cool feature is package management, AutomateIt is capable of interacting with multiple packaging systems (yum, apt, gem etc..) in a transparent way, its easy to replicate installed packages onto multiple systems:

# bash
%w(terminator rlwrap).each{ |p| package_manager.install p}

# programming
%w(vim git-core).each{ |p| package_manager.install p}

# communication
%w(deluge openssh-server).each{ |p| package_manager.install p}

# misc
%w(gpodder gnome-do).each{ |p| package_manager.install p}

# installing dropbox
download_manager.download 'http://www.getdropbox.com/download?dl=packages/nautilus-dropbox_0.6.1_i386_ubuntu_9.10.deb' , :to => '/tmp/nautilus-dropbox_0.6.1_i386_ubuntu_9.10.deb'
package_manager.install ({'nautilus-dropbox' => '/tmp/nautilus-dropbox_0.6.1_i386_ubuntu_9.10.deb'} , :with => :dpkg)

Using these tools has made configuration nirvana a bit closer to me & hopefully to you :)
          NILFS on Jaunty   
The latest release of Ubuntu includes the long awaited Ext4 FS (works flawlessly on my system).
Ext4 is faster & more secure but still lacks the ability to manage FS snapshots (ZFS excels in that, but runs only in FUSE on linux).
An interesting alternative is NILFS:
NILFS is a log-structured file system supporting versioning of the entire file system and continuous snapshotting which allows users to even restore files mistakenly overwritten or destroyed just a few seconds ago.


NILFS maintains a repo for Hardy, the Jaunty repos contain only the user land tools which won't do us much good since we need the kernel module as well, this leaves us only with the option of installing it from source (still quite easy).


# a perquisite
$ sudo aptitude install uuid-dev
# installing kernel module, result module resides in /lib/modules/2.6.28-11-generic/kernel/fs/nilfs2/nilfs2.ko
$ wget http://www.nilfs.org/download/nilfs-2.0.12.tar.bz2
$ tar jxf nilfs-2.0.12.tar.bz2
$ cd nilfs-2.0.12
$ make
$ sudo make install
# installing user land tools
$ wget http://www.nilfs.org/download/nilfs-utils-2.0.11.tar.bz2
$ tar jxf nilfs-utils-2.0.11.tar.bz2
$ cd nilfs-utils-2.0.11
$ ./configure
$ make
$ sudo make install

Creating a file system on a file (ideal for playing around):

$ dd if=/dev/zero of=mynilfs bs=512M count=1
$ mkfs.nilfs2 mynilfs

The FS is only a mount away:

# mounting the file as a loop device
$ sudo losetup /dev/loop0 mynilfs
$ sudo mkdir /media/nilfs
$ sudo mount -t nilfs2 /dev/loop0 /media/nilfs/

Now lets create a couple of files:

$ cd /media/nilfs
$ touch 1 2 3
# listing all checkpoints & snapshots, on your system list should vary
$ lscp
CNO DATE TIME MODE FLG NBLKINC ICNT
7 2009-05-01 01:08:09 ss - 12 6
13 2009-05-01 19:05:34 cp i 8 3
14 2009-05-01 19:05:59 cp i 8 3
15 2009-05-01 19:07:09 cp - 12 6
# creating a snapshot
$ sudo mkcp -s
# 15 is the new snapshot (mode is ss)
$ lscp
CNO DATE TIME MODE FLG NBLKINC ICNT
7 2009-05-01 01:08:09 ss - 12 6
13 2009-05-01 19:05:34 cp i 8 3
14 2009-05-01 19:05:59 cp i 8 3
15 2009-05-01 19:07:09 ss - 12 6
16 2009-05-01 19:08:59 cp i 8 6

# our post snapshot file
$ touch 4

Now lets go back in time into our snapshot, NILFS enables us to mount old snapshots as read only FS (while the original FS is still mounted):

$ sudo mkdir /media/nilfs-snapshot
$ sudo mount.nilfs2 -r /dev/loop0 /media/nilfs-snapshot/ -o cp=15 # only snapshots works!
$ cd /media/nilfs-snapshot
# as we might expect
$ ls
1 2 3

NILFS has some interesting features, its not production ready yet however it sure worth looking after its development.
          VMware recovery methods   
Iv been using VMware server and player for a time now and usually the hosted OS is XP, it happens to be that iv had the "luck" to witness the crash/corruption of such images and have devised two nifty tricks/methods of data recovery that i think worth sharing.
The first is for data recovery for an unbootable image, it matches cases in which you don't seem to be able to boot XP up but the image itself is readable & loaded by VMware (you see the black screen loading), the basic idea is to change the first boot device of the image to a Linux live cd (Ubuntu works perfectly), after booting up all you need to do is to mount the NTFS folder and copy it out (by scp, thumb drive, etc..).
The second one is more complex and comes to handle a case in which the image dose not load at all (vmware complains about a corrupted vmx file), in order to recover from this state there are a couple of prequisits:

  • VMware server installed.

  • You have another working XP image with a vmx file.

  • The corrupted image has a snapshot of a working state


The solution in this case is to copy the working vmx file into the folder of the corrupted image, load this image in VMware server and replace all the hard drive entries with the corrupted machine images, now all that is done is to revert to the last saved snapshot and see how the system is restored!
I hope that these two little methods help you as much as they helped me.
          The spreading Java and Ubunut love   
These days it seems that the Java community is more and more Ubuntu friendly, the integration of Java and Linux (Ubuntu in particular) has been steadily improving making it easy to install (with a bit tinkering) Java 6 update 10, Netbeans and other Java goodies.
The Linux and Java open spirit are starting to merge, these are good times to be a Java developer on Linux systems.

Note:
It seems as if in Ubuntu 8.10 running sudo apt-get install sun-java6-jdk installs 6 update 10 by default.
          Ubuntu...I love you, I hate you   

 I have been working on seeing if a .NET 3.5 application will port over to Linux, Ubuntu to be specific. I started with version 9.01, then 9.10 and now 10.04 as I find more and more that I need from Mono. I have a dual boot on a dev box, Windows 7 and Ubuntu.

An upgrade from Ubuntu 9.01 to 9.10 caused my mouse and keyboard to lock up. 

I was able to boot from a 9.10 cd. Then, I upgraded to 10.04 as I needed Mono 2,2. Upgrade worked, lost my windows boot though. it seems grub somehow jumped in and messed up the windows boot.

After Googlign liek crazy and trying this and that, these 2 links finally got me my windows boot back:

http://sourceforge.net/apps/mediawiki/bootinfoscript/index.php?title=Boot_Problems:Boot_Sector

http://support.microsoft.com/kb/927392

So, I am now thinking about trying SuSe instead as I hear\read it's more stable. I think a lot of my pains have been related to learning and getting use to Linux.  

 

 

 

 

 


          I need to replace my netbook   
Help me replace my eee901 netbook. My needs are annoyingly specific. For the last four years I’ve had an Asus eee901 which I’ve really liked, but it is now dying. I’ve loved the size and portability and, by putting in a decent 64GB SSD and extra RAM and using Ubuntu, I’ve mitigated the slowness of the N270 Atom chip somewhat. I use it mainly for email, word processing and browsing so I’m not that bothered about the speed, although it would be nice to be able to watch smooth videos. The keyboard I’ve learned to live with. In fact, the only thing that really annoys me is the low resolution of the screen (1024x600).

So, it’s replacement time and, irritatingly, netbooks have barely moved on and don’t come in 8.9 inches any more. My needs are: highly portable, can run Ubuntu, better than 1024x600 screen, can write longish docs on (so no tablets please). I don’t play games and I don’t really want to spend up into Ultrabook territory (yeah, the Asus UX21 is cool and thin, but it has a big footprint and is £750+). I’m thinking a max of about £550. This is not my primary computer. I live in the UK.

So I find myself looking at some very different machines:

One is the Acer Aspire One 522 which has 1280x800 resolution. It has a C-60 processor and again I’d upgrade the RAM and put in an SSD. Here I’m worried that I’m looking at something that is little better than the four year old netbook I’m replacing. I’m also worried that Acer’s build quality isn’t great.

The second is a Lenovo x121e which is 11.6 inches (1366x768) and rather bigger. But it is a fairly serious laptop and far better engineered and specced than the Aspire. It’s pricier too, at about £400, but I’m not really concerned about that. What I am really concerned about is the extra size. I want something I can carry in a smallish bag without really noticing.

A third, outside, consideration is the Asus Transformer Prime. The size of this is fantastic. But I’m very, very unsure of being locked into Android or, if I’m lucky, some iffy future Ubuntu port. Also, I’ve heard that word processing capabilities on it are terrible.

Anyway, if anyone has any experience of these machines or any other suggestions, I’d be very grateful.
          Help me choose a monitor   
Help me a choose a widescreen monitor. The more I look, the more confused I get. Tell me what to do. Please. OK, so I know there have been threads on this before, but new monitors appear almost as quickly as banks disappear.

I'm currently using a cheapo, old Dell R172FPt, 4:3 TN panel monitor. Nothing wrong with it, but advertising tells me that my display should make a more positive and aspirational lifestyle statement. And besides, I like the shiny.

I currently use my monitor mainly for office / internet with an occasional bit of photography (just playing around, nothing professional). I would like to be able to read two docs side by side. It's pretty unlikely that I will ever watch movies on it or play games. Generally I'm a fairly undemanding user, although I'd rather buy something that's reasonably good quality. But what?

I guess I want at least 20 inches, probably 22 or perhaps even 24. I thought the HP w2207h looked pretty nice as did a few Dells and some Samsungs. So, I started looking in earnest and this is why I got really, really confused. Here are the details of my confusion.

Resolution: Do I want 1680 x 1050 or 1900 x 1200? Remember, I don't watch movies.

Panel type: Do I want the cheaper TN or the more expensive other types - and if so, which one? Will I notice the difference? Will it rock my world? I currently have a geriatric TN and it seems OK.

Size: I've looked in PC World and, frankly 22 inches seems plenty big enough. I don't particularly want a TV on my desk and besides monster screens always seem a bit vulgar to me. Is there a compelling 24" argument, given my needs?

Style: I would prefer my monitor be relatively understated and not to look like something out of an 80s apartment or some bizarre geek-fetish object. It will be sitting with a brushed aluminium PC case, but black would probably look better than mismatched silver.

Various: It must have a digital input and I quite like the idea of one that can swivel through 90 degrees although, frankly, I'll probably use this feature twice. The graphics card I have is a GeForce 7900 GT/GTO and I am using Ubuntu. The PC is pretty new and well specced. I'm in the UK, so no esoteric US brands like Westinghouse, pls; also some monitors are somewhat pricier here than a direct exchange rate comparison.

Price: This is why I'm so confused. It would seem that I could spend anything from about £130 (ASUS VW222S) to about £400 (Dell 2408). Ideally I'd prefer not to pay more than £200. Is that enough? As I say, I'm not that demanding a user...


All in all I don't have a clue. About five months ago, I built a new PC and asked for advice here. People were terribly helpful and offered good, practical advice. Once again, lease cast the bread of your wisdom upon the waters of my ignorance. Hopefully we'll wind up with more than soggy bread.
          Help needed building a PC   
I want to build a PC and, while I'm not quite at the summit of ignorance, I'm high in the foothills and the air's getting thin. I need people who know more than me to tell me what to do. Please. Vastly longer explanation follows. Partly because I need one and partly because anything is more interesting than one’s own supposedly ‘challenging and rewarding’ work, I’ve decided to build a new PC. The need bit is because I mainly use Ubuntu these days and my AGP X1650 Pro card and it will never, ever be friends. So I looked into getting an NVIDIA card and came to the conclusion that as my ageing dell 4600 is about 50% replaced anyway, I may as well finish the job.

So far my mid range wish list is:

Motherboard: Gigabit p35 DS3L
Processor: E8400
RAM: OCZ / Corsair PC6400 2GB
Case: Lian Li PC60 Plus / PC7 Plus / PCG 50 PCV 600
Graphics Card: ???? Something by NVIDIA PCI-e

PSU Already have, coolermaster 500W
DVDR Already have, Samsung
HDD Already have, 500GB SATA, 16GB, Seagate
Monitor, KB, M Already have

OS - Ubuntu / XP

My only other stipulations are:

I use mostly Ubuntu so the card has to be NVIDIA. I don’t play games (unless solitaire counts) but do use Photoshop and want something that’ll give a good looking picture and last a few years. A lot of people seem to think the 8800 series is great. But I really don't need this, do I?

As for the case, well everything other than Lian Li either seems to look either cheap or like a special effect from the Alien Franchise (to my eyes anyway). I just want something understated and tasteful and quite like brushed aluminium / aluminum.

Anyway, please anoint my ignorance with your cleverosity. Thanks.
          PARAMILITARES COLONIALISTAS FASCISTAS CHILENOS ASESINAN WERKEN Víctor Mendoza Collío (27) en la comunidad Requem Pillan, al interior de la localidad de Pidima, comuna de Ercilla, IX región:    

El asesinato de comunero Mapuche, la responsabilidad “política” de carabineros y las sospechas en integrantes del GOPE








En cada siniestro y cobarde asesinato que ha ocurrido en territorio Mapuche de comuneros , en el marco de conflictos de tierras ancestrales y que ha significado la responsabilidad directa de ciertos agentes policiales del estado, como ocurrió con el asesinato de Alex Lemun con un disparo en la frente y que involucró a un oficial de la institución que quedó impune. Los asesinatos de Jaime Mendoza Collio y Matías Catrileo, por la espalda y que involucró a dos integrantes del GOPE (Grupo de operaciones policiales especiales de carabineros), la institucionalidad de carabineros ha tratado de encubrirlos, refiriéndose a “enfrentamientos” y dando fe a falsas coartadas señaladas por los hechores.
Asimismo, coincidentemente, principalmente desde las directrices del diario el mercurio (Emol) de Agustín Edwards, un civil que sigue en la impunidad a pesar de todas sus responsabilidades en violaciones a los Derechos Humanos durante la dictadura militar y de los montajes, conspiraciones y cortinas informativas para menoscabar y desviar las atenciones de situaciones de fondo con respecto a las movilizaciones y protestas sociales que exigen reparaciones de tierras, situándolas Edwards como “terrorismo Mapuche” o “violencia rural”, sigue siendo activo en dar cobertura a encubrimientos de los asesinos de jóvenes Mapuche.
Este miércoles 29 de octubre de 2014,  fue asesinado a metros de la puerta de su casa el comunero mapuche Víctor Mendoza Collío  en la comunidad Requem Pillan, al interior de la localidad de Pidima, comuna de Ercilla, IX región. El comunero recibió un disparo mortal de escopeta en la clavícula. Mendoza era werken de su comunidad que se encontraba en un proceso de recuperación de tierras ancestrales. Según su familia y miembros de la comunidad esto se trató de un asesinato perpetrado por desconocidos y no por una riña o enfrentamiento entre comunidades como se ha dicho.
Por su parte, medios comerciales informativos señalaban que el hecho se habría tratado de “un conflicto entre comunidades”. El diario el Mercurio publicaba: “La policía pesquisaba esta noche un incidente ocurrido en una zona cercana a Ercilla, en la Región de La Araucanía, en el que resultó muerto un comunero mapuche”. Agrega: “La víctima fue identificada como Víctor Manuel Mendoza Collío, de 46 años de edad. Falleció en el área de la comunidad Requén Pillán”. Continúa señalando: “De acuerdo a los primeros reportes desde la zona, el hecho se relacionaría con un enfrentamiento entre comunidades indígenas”.
Cabe consignar que Rodrigo Melinao Licán, fue asesinado el 6 de agosto del 2013. Asesinato que aún sigue impune. Rodrigo fue rematado en cercanías de su casa ubicada al interior de la comunidad Rayen Mapu, en proceso de recuperación territorial, del Lof  Lolokos, en el sector de Pidima, comuna de Ercilla. Su cuerpo fue encontrado por su familia con impactos de escopeta, en el mismo lugar se encontraron los cartuchos y se estableció que le dispararon a poca distancia.
¿Qué se desprende?
Los asesinatos de jóvenes y comuneros Mapuche se han enmarcado con el propósito de instalar al interior de las comunidades estados de pavor. Asimismo, de polarizar y generar un ambiente de mayor tensión en los territorios donde se están sosteniendo procesos de recuperaciones de tierras por parte de comunidades Mapuche y que involucra los intereses de latifundistas de corte colonialistas y de empresas forestales, aliados de Agustín Edwards, dueño del Mercurio. El ambiente termina por perpetuar mayores actos represivos y mayor presencia de agentes policiales militarizados en los territorios.
Nuevamente aparecen cortinas que buscan desviar la atención sobre posibles responsabilidades. La ligereza de medios irresponsables en situar los hechos como “enfrentamiento entre comunidades indígenas”, como vocifera el Mercurio y otros medios que replican, no aparecen como actos casuales o de incompetencia periodística, sino como actos planificados.
Los hechos nuevamente ocurren en zonas donde existe una alta presencia y control policial, por parte de fuerzas represivas de carabineros y, quienes principalmente se han salido de todo protocolo y procedimiento en allanamientos o desalojos, han sido principalmente integrantes del GOPE.
Cabe consignar que para el asesinato de Jaime Mendoza Collio el 12 de agosto del 2009, quien participaba desarmado de las acciones pacíficas en los procesos de recuperación de tierras y que involucraba los intereses de empresas forestales como Mininco y Arauco, fue perseguido por varios kilómetros para luego ser asesinado con disparos por la espalda. Su asesino, Miguel Jara Muñoz, miembro del GOPE (Grupo de operaciones policiales especiales de carabineros).
El diario El mercurio dio amplia cabida a la tesis del “enfrentamiento” y al actuar del carabinero en legítima defensa. Un año más tarde, ante las comprobaciones de montaje y alterar medios de prueba, el medio de Edwards daba cabida informativa al general de carabineros de la zona: “El nuevo general jefe de la Novena Zona de Carabineros, Iván Bezmalinovic, descartó la existencia de un montaje por parte de personal del GOPE en la muerte del comunero mapuche Jaime Mendoza Collío, ocurrida en 2009 durante un enfrentamiento”, publicaba así el 24 de noviembre del 2010.
De acuerdo a las pericias e investigaciones, el GOPE había disparado su propio casco y chaleco antibalas para hacer creer que había sido un enfrentamiento. El Mercurio y el general Bezmalinovic, hoy represor en la Región de Bio Bio, insistían en que había sido un “enfrentamiento” y en “legítima defensa”.
Situación muy similar ocurrió con Matías Catrileo, asesinado por la espalda por el GOPE Walter Ramírez. El Mercurio publicaba el 4 de enero del 2008: “Comunero mapuche, Matías Catrileo, muere en enfrentamiento con Carabineros en el sector de Vilcún, en la Novena Región”.
Lo ocurrido con Rodrigo Melinao el 2013 y ahora con Víctor Mendoza Collio, se trataría de un nuevo modus operandi y que involucraría a asesinos mercenarios de alta preparación para escabullirse, actuar en la noche y matar cobardemente.
¿Qué relaciones mantienen ciertas jefaturas del GOPE y de Carabineros con Agustín Edwards dueño del Mercurio y presidente de la Fundación Paz Ciudadana?
¿Qué relación hay con unos panfletos aparecidos hace algunas semanas en la zona de Arauco (Región del Bio Bio) con el símbolo de patria y libertad, anunciando el asesinato o mutilación de “cualquier Mapuche”?
Es inconcebible que Gustavo Villalobos, actual director de la agencia nacional de inteligencia, continúa en su cargo, no solamente por incompetencia al no precisar y dirigir análisis e investigaciones al interior de ciertos poderes fácticos y mercenarios, sino a esta altura, por tantos años de complicidad, al no aportar en lo absoluto al esclarecimiento de evidentes contubernios entre ciertos sectores empresariales con agentes estatales policiales en los asesinatos de comuneros Mapuche.
Más de alguien dirá que no es prudente realizar tesis o conjeturas y polarizar aún más el ambiente, aunque sea evidente las responsabilidades, sin embargo se guarda silencio cuando ciertos agentes, sicarios y  Agustín Edwards, conspiran, mienten y acusan falsamente a integrantes del Pueblo Mapuche movilizado.

Alfredo Seguel

INFORMACIÓN RELACIONADA

Peñi Víctor Mendoza Collío fue asesinado frente a su casa por desconocidos y no en enfrentamiento entre comunidades http://www.mapuexpress.org/2014/10/30/peni-victor-mendoza-collio-fue-asesinado-frente-a-su-casa-por-desconocidos-y-no-en

Victor Mendoza Collío, muere tras recibir disparo en las cercanías de Pidima / http://www.mapuexpress.org/2014/10/30/victor-mendoza-collio-muere-tras-recibir-disparo-en-las-cercanias-de-pidima

El asesinato de Rodrigo Melinao y los consecutivos extraños siniestros ¿De quién es este modus operandi? http://www.mapuexpress.net/content/news/print.php?id=10790

El asesinato de Matías Catrileo: Plan de Operaciones encubiertos de un estado siniestro / http://www.mapuexpress.net/content/news/print.php?id=5140

CHILE: LOS GOLPES DEL GOPE EN SEUDA DEMOCRACIA / Leer Más: http://www.g80.cl/noticias/columna_completa.php?varid=5688

Pueblo Mapuche en El Mercurio: Racismo, discriminación y Maquinación noticiosa / http://www.mapuexpress.net/content/publications/print.php?id=2501

“Han desplegado (nuevamente) un perverso manto de turbiedades en territorio Mapuche” / http://meli.mapuches.org/spip.php?article2859

Ver imagen de panfleto aparecido en zona de Arauco




- See more at: http://mapuexpress.org/2014/10/31/el-asesinato-de-comunero-mapuche-la-responsabilidad-politica-de-carabineros-y-las#sthash.yMbdcMYx.dpuf

Familia de Víctor Mendoza Collío: Él fue asesinado afuera de su casa y no en una riña entre comunidades como se dijo


El comunero mapuche Víctor Mendoza Collío, fue asesinado a pocos metros de la puerta de su casa por dos desconocidos. Familia desmiente la versión dada por la prensa y carabineros de que se tratara de una riña entre comunidades y señalan que él fue asesinado y piden justicia. 

Este miércoles 29 de octubre fue asesinado a metros de la puerta de su casa el comunero mapuche Víctor Mendoza Collío (27) en la comunidad Requem Pillan, al interior de la localidad de Pidima, comuna de Ercilla, IX región. El comunero recibió un disparo mortal de escopeta en la clavícula. Mendoza era werken de su comunidad que se encontraba en un proceso de recuperación de tierras ancestrales. Según su familia y miembros de la comunidad esto se trato de un asesinato perpetrado por desconocidos y no por una riña o enfrentamiento entre comunidades como se ha dicho.
Esto contrasta con la versión que diversos medios en horas de la noche del miércoles publicaban, señalando que el hecho se habría tratado de una “riña” en lo que se denominó “un conflicto entre comunidades”.
La mayoría de dichos medios citaban al capitán de carabineros, Francisco Guzmán, quien en el lugar habría indicado que “la muerte del hombre se habría producido en el contexto de un enfrentamiento entre dos comunidades mapuche” y agregaba que en la zona “había barricadas”.
Versión que fue descartada y desmentida por miembros de la comunidad y por la propia familia del peñi asesinado.
Así lo consignaba ya durante la madrugada el medio Mapuexpress tras haber conversado con cercanos a la familia de Víctor Mendoza Collío, cuestión que nos confirmaron telefónicamente y que en la mañana la misma familia confirmó.
Uno de los contactados señaló tajantemente: “No es problema entre o al interior de su comunidad. Llegaron dos personas a su casa, él salió a atender y luego le dispararon por la espalda. No se sabe quiénes fueron pero los culpables hicieron esta acción a rostro descubierto”.
Por su parte, la hermana de Víctor, Irene Mendoza, descartó también que se tratara de un conflicto entre comunidades y señaló que “nadie de su familia tiene conflicto con otras comunidades”, y precisó: “el fue asesinado delante de su esposa que está embarazada de varios meses, no sabemos quien lo hizo, pero a él lo asesinaron”.
El fiscal de la Región Víctor Paredes señaló que se estaban realizando todas las diligencias para aclarar el hecho y que por el momento no se podía descartar nada.

Un modus operandi que se repite:


Víctor Mendoza Collío, era primo de Jaime Mendoza Collío, joven mapuche quien fuera asesinado con un disparo en la espalda el año 2009 a manos del cabo de carabineros Miguel Jara Muñoz durante el proceso de recuperación del Fundo Santa Alicia en Angol. El uniformado fue absuelto y sigue en la institución, su familia a 5 años del crimen aún clama justicia.
El caso de Víctor trajo inmediatamente a la memoria el asesinato de Rodrigo Melinao Licán, ocurrido el 6 de agosto del 2013. Asesinato que aún sigue impune. Rodrigo fue asesinado en cercanías de su casa ubicada al interior de la comunidad Rayen Mapu, en proceso de recuperación territorial, del Lof  Lolokos, en el sector de Pidima, comuna de Ercilla. Su cuerpo fue encontrado por su familia con impactos de escopeta, en el mismo lugar se encontraron los cartuchos y se estableció que le dispararon a poca distancia.
asesinato rodrigo melinao
Asesinato de Rodrigo Melinao Licán
La familia y la comunidad señalaron en aquella oportunidad lo extraño del crimen en una zona en donde existe gran contingente policial, de hecho la comunidad era sistemáticamente monitoreada y controlada, y a pocos metros de un camino ocupado casi exclusivamente por la policía.
Además su tumba, ubicada en el cementerio de Chequenko, ha sido también profanada 2 veces, siendo también rayada con frases ofensivas, lo que ha sido denunciado por su familia sin obtener mayores respuestas ni diligencias.
Este 2014 sucedió un hecho similar, también en Ercilla. El reconocido werkén Hugo Melinao fue atacado el 2 de de octubre de este año por civiles quienes lo hirieron a bala (en su pierna) afuera de su casa de la comunidad en Pailahueke para luego huir. Melinao y su familia llamaron a la ambulancia que luego lo trasladó hasta el hospital. Mientras Melinao estaba en el hospital su casa fue allanada y el quedó en calidad de detenido. Esto tras ser acusado de ser responsable de quemar unos camiones que transitaban por la ruta 5 sur, a la altura de Pailahueke.
Melinao había comentado a sus cercanos las sospechas de que querían atentar contra el. Esto tras la quema de la casa que días antes ocupara Melinao, hecho ocurrido el 18 de julio de este año. En dicho lugar, en el que ya no vivía Melinao, dejaron panfletos con la leyenda “no te gusta quemar casas de personas… di ahora que se siente”.
Hace rato que estamos viendo cosas extrañas en la zona, no descartamos que se este ocupando a personas y que estas estén amparadas por la misma policía para amedrentar y asesinar a peñis que están en procesos de recuperación territorial“, nos señala un miembro de la comunidad Rayen Mapu, denuncia que ya han hecho publica en reiteradas ocasiones sin tener respuesta ni el interés de las autoridades.
La tesis apunta a grupos paramilitares en la zona, cercano a latifundistas, y también a Yanakonas (traidores) que están trabajando con estos grupos y que circulan por la zona sin problemas y con el beneplácito de la policía y las autoridades.
Por su parte el gobierno ha salido rapidamente a decir que el crimen “no tendría un carácter político”. Así lo señaló la presidenta Bachelet desde España quien señaló: “No es muy claro en qué condiciones haya fallecido, sí me han dicho que no tiene un sentido político ni de otro tipo, pero será la justicia, la fiscalía la que determine las causas” según consigna La Nación
El cuerpo de Víctor Mendoza Collío fue llevado hasta el Servicio Médico Legal de Temuco para realizar la autopsia y peritajes correspondientes. La familia y su comunidad exigen que no haya impunidad como en otros casos y que haya justicia y castigo para él.



- See more at: http://www.radiovillafrancia.cl/familia-de-victor-mendoza-collio-el-fue-asesinado-afuera-de-su-casa-y-no-en-una-rina-entre-comunidades-como-de-dijo#sthash.4zae8yPV.dpuf


          Debian / Ubuntu Linux Install and Configure Remote Filesystem Snapshot with rsnapshot Incremental Backup Utility   
I would like to configure my Debian box to backup two remote servers using rsnapshot software. It should make incremental snapshots of local and remote filesystems for any number of machines on 2nd hard disk located at /disk1 ( /dev/sdb2). How do I install and use rsnapshot on a Ubuntu or Debian Linux server?
          Welcome to the Laughing Horse Blog.   
This is a blog for the Laughing Horse Books and Video Collective. We are a All Volunteer Lead and Run Bookstore. We have a meeting space available for like minded groups to either rent out or use free of charge. There are 2 public access terminals(computers-the public can use for free) running and testing free, open source software, (Ubuntu-Linux), we also send out a Free, Open Wi-Fi signal, so you can bring your own Wi-Fi ready device and surf away. We do enforce a "Safer Space" policy, and expect those who enter the bookstore to work with us on keeping the place a "safer space" to come to, if you have questions about it, please ask the volunteer staffing the shift at the bookstore. So come on down, you might wanna call first, and make sure someones there to let you in. Since we are all volunteers, we are not always able to fill all-shifts, for the whole shift. Thanks for your understanding. Mike-d.(part of the Laughing Horse Collective).

          Comment on Beta News – Flash Player NPAPI for Linux by AngryPenguinPL   
Tested on Ubuntu 16.04 x64 with Unity.
          Petunjuk Cara Menginstal Komputer Sendiri   
Dewasa ini komputer telah digunakan secara meluas oleh berbagai kalangan masyarakat. Hal yang sering menjadi pertanyaan awal adalah bagaimana cara menginstal komputer. Berikut adalah langkah-langkah yang bisa dijadikan pedoman umum untuk menginstalasi program komputer, baik itu untuk install pc maupun install laptop.



    Cara Instalasi Komputer Sendiri
  1. Cek kelengkapan hardware. Apakah komponen komputer atau laptop Anda sudah terakit dengan benar, lengkap, dan memenuhi persyaratan minimal untuk menginstal sistem operasi tertentu. Lebih baik lagi jika perangkat keras pendukung seperti kartu jaringan, printer, scanner, dan sebagainya telah terpasang di komputer sebelum memulai instalasi. Tujuannya agar nantinya sistem operasi dapat otomatis mendeteksi secara dini perangkat-perangkat komputer yang Anda miliki.
  2. Instal baru (fresh install) atau instal ulang (reinstall)? Pastikan apakah instalasi akan Anda lakukan pada komputer baru yang belum memiliki sistem operasi dan program komputer, atau dilakukan pada komputer yang sebelumnya sudah memiliki sistem operasi plus program aplikasi dan data-datanya. Proses instal menjadi lebih mudah dan cepat jika dilakukan pada komputer baru. Jika yang akan Anda lakukan adalah instal ulang (karena sistem operasi sebelumnya sudah bermasalah atau ingin berganti sistem operasi lain), maka pastikan data-data file dokumen, gambar, foto, film, dan sebagainya yang telah Anda miliki sudah Anda backup dalam keping CD, flash disk, atau minimal berada di lokasi yang berbeda dengan partisi harddisk untuk instalasi nantinya.
  3. Tentukan sistem operasi yang akan Anda instal ke komputer. Apakah Anda akan menginstal OS Windows XP (Windows terpopuler), Windows 7 (Windows terbaru), atau menginstal Ubuntu Linux (sistem operasi gratis/opensource). Jika ingin menginstal Windows, pastikan Anda telah menyiapkan CD original dari Windows yang Anda beli, termasuk mencatat nomor seri yang harus diisikan saat proses install komputer berjalan. Anda bisa mempertimbangkan menggunakan sistem operasi Ubuntu karena bersifat gratis dan lengkap dengan program aplikasi pendukungnya. Jika komputer Anda tidak memiliki drive CD/DVD (misalnya netbook biasanya tidak memiliki piranti ini), maka pertimbangkan apakah Anda perlu membeli/meminjam drive CD/DVD eksternal atau menggunakan flashdisk sebagai sumber instalasinya. Adapun cara mempersiapkan file installer sistem operasi di keping flashdisk mungkin akan dibahas di posting tersendiri. 
  4. Persiapkan cd driver. Nantinya, setelah sistem operasi terinstal ke komputer, kemungkinan tampilan layar komputer belum optimal, suara tidak terdengar, printer belum bisa dipakai, dan sebagainya. Anda akan memerlukan CD atau disket driver untuk menginstal driver VGA dan sound card, driver printer, serta driver piranti tambahan lainnya. Jika Anda tidak memilikinya, Anda bisa mendownloadnya di situs internet produsen hardware terkait atau mencari alternatifnya dengan menggunakan mesin pencari.
  5. Sediakan software dan program aplikasi komputer yang Anda perlukan. Misalnya, selain sistem operasi, Anda akan membutuhkan program office, software desain grafis, antivirus, aplikasi bantu untuk akses internet, game favorit, dan tool perawatan sistem komputer. Jika Anda menggunakan OS Ubuntu Linux, kemungkinan semua sudah tersedia dan Anda bisa melewatkan tahapan ini.
  6. Tambahkan program lain sesuai kebutuhan. Seiring berjalannya waktu, Anda mungkin juga ingin mengetahui cara menginstal font, memasang flash player, PDR reader, bahkan memerlukan instalasi tool pengembangan seperti XAMPP untuk merancang website. Teknik-teknik instalasinya perlu Anda pahami dan Anda kuasai.
  7. Buang software yang tidak diperlukan. Pastikan Anda bisa membuang instalasi (uninstall) program-program yang telah Anda install sebelumnya. Kemungkinan, beberapa software yang pernah Anda instal atau tanpa sengaja turut terinstal ternyata tidak Anda butuhkan atau bahkan tidak pernah Anda pakai. Dengan membuang program yang tidak diperlukan, selain akan memperbesar ruang kosong harddisk juga akan mempercepat kinerja komputer Anda. 
Mengingat begitu panjang dan banyak prosedur dan langkah yang harus dilakukan untuk setiap tahapan di atas, saya akan mencoba membahasnya satu-persatu di posting tersendiri. Selamat mengikuti dan selamat mencoba!

          Petunjuk Cara Menginstal Komputer Sendiri    
Dewasa ini komputer telah digunakan secara meluas oleh berbagai kalangan masyarakat. Hal yang sering menjadi pertanyaan awal adalah bagaimana cara menginstal komputer. Berikut adalah langkah-langkah yang bisa dijadikan pedoman umum untuk menginstalasi program komputer, baik itu untuk install pc maupun install laptop.



    Cara Instalasi Komputer Sendiri
  1. Cek kelengkapan hardware. Apakah komponen komputer atau laptop Anda sudah terakit dengan benar, lengkap, dan memenuhi persyaratan minimal untuk menginstal sistem operasi tertentu. Lebih baik lagi jika perangkat keras pendukung seperti kartu jaringan, printer, scanner, dan sebagainya telah terpasang di komputer sebelum memulai instalasi. Tujuannya agar nantinya sistem operasi dapat otomatis mendeteksi secara dini perangkat-perangkat komputer yang Anda miliki.
  2. Instal baru (fresh install) atau instal ulang (reinstall)? Pastikan apakah instalasi akan Anda lakukan pada komputer baru yang belum memiliki sistem operasi dan program komputer, atau dilakukan pada komputer yang sebelumnya sudah memiliki sistem operasi plus program aplikasi dan data-datanya. Proses instal menjadi lebih mudah dan cepat jika dilakukan pada komputer baru. Jika yang akan Anda lakukan adalah instal ulang (karena sistem operasi sebelumnya sudah bermasalah atau ingin berganti sistem operasi lain), maka pastikan data-data file dokumen, gambar, foto, film, dan sebagainya yang telah Anda miliki sudah Anda backup dalam keping CD, flash disk, atau minimal berada di lokasi yang berbeda dengan partisi harddisk untuk instalasi nantinya.
  3. Tentukan sistem operasi yang akan Anda instal ke komputer. Apakah Anda akan menginstal OS Windows XP (Windows terpopuler), Windows 7 (Windows terbaru), atau menginstal Ubuntu Linux (sistem operasi gratis/opensource). Jika ingin menginstal Windows, pastikan Anda telah menyiapkan CD original dari Windows yang Anda beli, termasuk mencatat nomor seri yang harus diisikan saat proses install komputer berjalan. Anda bisa mempertimbangkan menggunakan sistem operasi Ubuntu karena bersifat gratis dan lengkap dengan program aplikasi pendukungnya. Jika komputer Anda tidak memiliki drive CD/DVD (misalnya netbook biasanya tidak memiliki piranti ini), maka pertimbangkan apakah Anda perlu membeli/meminjam drive CD/DVD eksternal atau menggunakan flashdisk sebagai sumber instalasinya. Adapun cara mempersiapkan file installer sistem operasi di keping flashdisk mungkin akan dibahas di posting tersendiri. 
  4. Persiapkan cd driver. Nantinya, setelah sistem operasi terinstal ke komputer, kemungkinan tampilan layar komputer belum optimal, suara tidak terdengar, printer belum bisa dipakai, dan sebagainya. Anda akan memerlukan CD atau disket driver untuk menginstal driver VGA dan sound card, driver printer, serta driver piranti tambahan lainnya. Jika Anda tidak memilikinya, Anda bisa mendownloadnya di situs internet produsen hardware terkait atau mencari alternatifnya dengan menggunakan mesin pencari.
  5. Sediakan software dan program aplikasi komputer yang Anda perlukan. Misalnya, selain sistem operasi, Anda akan membutuhkan program office, software desain grafis, antivirus, aplikasi bantu untuk akses internet, game favorit, dan tool perawatan sistem komputer. Jika Anda menggunakan OS Ubuntu Linux, kemungkinan semua sudah tersedia dan Anda bisa melewatkan tahapan ini.
  6. Tambahkan program lain sesuai kebutuhan. Seiring berjalannya waktu, Anda mungkin juga ingin mengetahui cara menginstal font, memasang flash player, PDR reader, bahkan memerlukan instalasi tool pengembangan seperti XAMPP untuk merancang website. Teknik-teknik instalasinya perlu Anda pahami dan Anda kuasai.
  7. Buang software yang tidak diperlukan. Pastikan Anda bisa membuang instalasi (uninstall) program-program yang telah Anda install sebelumnya. Kemungkinan, beberapa software yang pernah Anda instal atau tanpa sengaja turut terinstal ternyata tidak Anda butuhkan atau bahkan tidak pernah Anda pakai. Dengan membuang program yang tidak diperlukan, selain akan memperbesar ruang kosong harddisk juga akan mempercepat kinerja komputer Anda. 
Mengingat begitu panjang dan banyak prosedur dan langkah yang harus dilakukan untuk setiap tahapan di atas, saya akan mencoba membahasnya satu-persatu di posting tersendiri. Selamat mengikuti dan selamat mencoba!

          KidsRuby running on the Raspberry Pi   

KDE Project:

I've been following the development of the Raspberry Pi computer, which is a small ARM based device costing only 25-30 euros. It is designed to plug into TVs, and is targeted at teaching kids to learn programming. I was excited to read today that the KidsRuby programming environment is running on a Raspberry Pi. You can read some of Liz's other blogs for more details about the Raspberry Pi's progress.

That is a subject which is dear to my heart. I don't personally have any kids, but I'm very interested in how they learn programming and how young they can get started. I came to Gran Canaria six years ago to help Agustin Benito put KUbuntu based computers into all the Canarian schools, and to learn Ruby On Rails programming at a great company called 'Foton'. That seemed the perfect combination to me.

I first learned programming when I was 19. I was studying for a degree called 'Philosophy with Cognitive Studies', and we were taught to program by people who were very interested in how people learned to program. The course taught us about 'constructivist' educational theories by Jean Piaget, Symour Papert and so on. The idea is that you learn by doing things. A computer turtle that you can program to move round the floor, or wooden blocks that you assemble into patterns to discover about maths and geometry are the perfect tools for the constructivist approach. This is instead of sitting there in class passively while a teacher stuffs your head with facts, and then at the end of the year you have a massive load of exams to test whether you have learned the facts correctly. Sadly dumb fact learning of a 'national curiculum' and SATS tests are now the norm in 'politician driven' educational systems as in the UK or the USA. Instead of a glorious constructivist future made possible by the learning augmentation features of personal computers that people dreamed of in the 1960s and 1970s, we've ended up with a boring 19th century style dead end to education.

The cognitive studies students were the first guinea pigs for a new way of teaching programming by using the POP-11 language interactively via teletypes in a Unix environment. It seems a bit strange today to think of a teletype as an advanced device, but in the mid-1970s video terminals were still really expensive and most students learned programming using punched cards that they submitted to a central facility. I had gone to some engineering lectures about FORTRAN before I switched to Philosophy, and because it used punched cards the whole subject seemed about the dullest thing you could imagine. But using a teletype interactively to work though self taught assignments was a different thing altogether. You could type a line of code into the computer, hit return and it did something.

In the early 1980s cheap computers running BASIC became widely available. But because I am a bit of a programming snob, I wasn't very interested in BASIC as it seemed like warmed over FORTRAN (it was) to me. I still wanted my own computer but none of them had much appeal. Even though these machines weren't for me, it did mean in the 1980s that many people got started in programming with their Sinclairs, Commadore 64s, BBC Micros and so on. It was a golden age of programming in schools. Once those machines were replaced by Windows machines in schools in the 1990s, those programming environments weren't available. Instead of teaching themselves BASIC, kids would learn about Microsoft Word instead, and that is very sad. Instead of finding about about the fun world of programming, they were learning how to be an Office drone. I once showed my nieces what 'hello world' looked like in QtRuby. They were about 15 and 17 years old and still at school at the time, but neither of them had ever seen a computer program before. They did say that it looked quite english-like and they could work out what it was doing.

After learning programming at University and getting my first programming job, the next big jump in my programming education came when I was 27, and I bought one of the first Apple Macintoshs. I ran MacPascal on it and taught myself Pascal using a book called 'Oh! Pascal' which I think is a classic. After learning Pascal I was able to get myself a programming job as a Pascal programming. That became a pattern for the rest of my career. I bought a NeXT computer, learned Objective-C and then became a professional Objective-C programmer. More recently I learned Ruby by writing the QtRuby bindings on a MacBook and then became a Rails programmer.

Once I had my own computers and my chosen programming environments to learn about, I was able to set the direction of my career. That probably seems fairly normal today, but I don't think it was how most programmers of my age progressed in their careers. Normally you would rely on your current employer to send you on programming courses. Over the years thanks to the generosity of your employer you were able to progress from being a COBOL-74 programmer to a COBOL-85 programmer who also knew RPG. Exciting stuff!

So there is a massive difference between having your own computer that you can program, as opposed to having to borrow someone else's machine before you can learn anything new. This is not only a problem for grown ups like myself, but it is a problem if you are about 9 years old and want to start programming. Your parents probably have a Windows machine that doesn't have any built in programming environment, and even if it did they might not be very happy to let a young kid loose on it. This is where the Raspberry Pi comes in. It is cheap enough for every kid to buy, and they can do what they like with it as it doesn't matter if they fill up the file system with logging messages when their program loops, or how many viruses they accidently download as it doesn't run viruses.

In the blog about KidsRuby running on the Raspberry Pi there is a link to a talk by Ron Evans that he gave at the Golden Gate Ruby Conference 2011 called KidsRuby: Think of the Children. I would have been very interested in the talk anyway, but what made my day is that the KidsRuby application is written in QtRuby, and it even gets a mention in one of the slides!

It made me think I really must try and get QtRuby 3.0 moving again. We had a mini-BOF at the Berlin Desktop Summit where 5 of us got together and then Arno ran through a description of the basics of how Smoke worked, followed by me running through the QtRuby 3.0 code. It is designed to be much easier to follow than the QtRuby 2.x code, and so that the 'bus factor' is increased. If I happen to drink 15 pints of beer one evening and fall off a balcony then it might be the end for me, but at least there would be some chance that others would be able to carry on maintaining QtRuby. It will also be more modular and instead of always loading all the Qt libraries, it will only load the ones you want. That should help memory constrained devices like the Raspberry Pi. I also need to make sure that QtRuby can be built as gems on all platforms right from the start. Ron Evans said in his talk that packaging KidsRuby was one of the trickiest aspects of the project. He sometimes asks questions as 'deadprogram' on the #qtruby irc channel about packaging issues, but I don't think I've personally given it the priority it deserves. Ryan Melt has done a great job with the qtbindings project on github where you can get cross platform gems, but I must try and structure QtRuby 3.0 to make it easy to build gems from the start.


          Screen Locking in Fedora Gnome 3   

KDE Project:

I wanted to try out Fedora 15 with Gnome 3 running under VirtualBox on my iMac before I went to the Berlin Summit. I've already tried using Unity-2d on Ubuntu, and I thought I if I had some real experience with Gnome 3 as well, I could have a bit more of an informed discussion with our Gnome friends and others at the Summit.

Sadly it didn't go all that well. Installing the basic distro went fine, but I couldn't manage to install VirtualBox Guest tools so that 3D graphics acceleration would work. The tools built fine, but the 'vboxadd' kernel module was never installed and there was no clue why in the build log. Then while I made a first attempt at writing this blog, Virtual Box crashed my machine and I lost everything. So it looks like I'll stick with VMWare for a bit yet even though it doesn't have 3D acceralation for Linux.

I discovered that Gnome 3 locks the screen, when it goes dim, by default just like I found Kubuntu and Mandriva did recently. I had a look at where that option is defined and it was under 'Screen'. So Screen locking was under 'Screen' and I managed to guess where it was first time. Score some points for Gnome usuability vs KDE there! Even so I still don't think it is a 'Screen' thing it is a 'Security' thing. Interestingly Ubuntu doesn't lock the screen by default. Does that mean Fedora and KDE are aimed at banks, while Ubuntu is more aimed at the rest of us?

In contrast, I had spent a lot of time going round the KDE options and failing to find it how to turn off screen locking. Thanks to dipesh's comments on my recent blog about virtual machines and multi booting USB sticks he pointed out that it was under 'Power Saving', and I managed to turn it off on my Mandriva install. There were also options under power saving to disable the various notifications that had annoyed me so much like the power cable being removed. Excess notifications are a real pain and it is very important to be disiplined about when to output them in my opinion. It feels like some programmer has mastered the art of sending notifications, and they want to show that skill off to the world.

Another app that outputs heroic numbers of notifications is Quassel when it starts up. I get a bazillion notifications about every channel it has managed to join, that I really, really don't care about. I think developers need to ask the question 'if the user was given notification XXX how would they behave differently, compared to how they would have behaved if they never received it in the first place?'. For instance, I can't imagine what I would do differently if I am told the power cord is disconnected, when it was me who just pulled it out. Maybe it would be useful if you had a computer where the power cord kept randomly falling out of its socket. Or with Quassel, do I sit watching the notfications for the twenty different IRC channels that I join waiting for '#kde-devel' so I can go in immediately. In fact I can't do anything with my computer because it is jammed up with showing me notfications.

Unlike Kubuntu, Mandriva was able to suspend my laptop when the lid was shut even when the power cord was connected.

The default behaviour on both Kubuntu and Mandriva with my HP 2133 netbook when I opened the lid, was to wake up with lots of notifications that I wasn't interested in, force me to enter my password in a screen lock dialog that I didn't want, and then immediately go back to sleep. This was actually the last straw I had with Kubuntu, and I was really surprised that Mandriva 2011 was exactly the same.

I had a look at my Mac System Preferences and couldn't find any way to lock the screen. The closest equivalent was in the 'Security' group that allowed you to system to log you out after x minutes of inactivity. That option certainly isn't on by default. Macs go to sleep when you close the lid, and wake up when you open the lid without a lot of fuss or bother.

Anyhow I look forward to seeing everyone in Berlin..


          Multiple everything - using VMWare, VirtualBox and Multisystem usb drives   

KDE Project:

Recently there was an post on Hacker News about collective nouns for birds in English. I run loads of virtual machines on my computer and I wonder what they should be called - 'a herd of virtual machines'? I have the mediocre Windows 7 Home Premium, and I wonder if that should be called a 'A badling of windows' after the phrase 'A badling of ducks'.

The big change for me in my computing evironment recently has been using virtual machines all the time instead of setting up my computers with multiple boot options. For work I use a 27 inch Macintosh with 8 GB of memory, running Windows 7 Home Premium and Kubuntu 11.04 under VMWare as guests under Mac OS X as the host. We have moved from scarcity in computer resources for programmers to a world where even the most under powered netbook can handle most of the things I need to do.

I spent most of last weekend preparing my HP 2133 netbook for use at the forthcoming Desktop Summit. I wanted to get it working well in advance as I had a lot of trouble with my netbook at the recent Qt Contributors Summit. At that conference I couldn't even get WiFi working for the first day or two, and it was only because I happened to bump into the awesome Paul Sladen from Canonical that I managed to get it working to a reliable standard at all.

Talking to Paul he didn't think he was any kind of power user (he works for Canonical as a UI designer), and that his suggestions to me about how to sort out my machine should be obvious. I'm not sure about what exactly I'm good at, but I think it is only programming and I am not a very good systems administrator (or a UI designer for that matter). But if I have a lot of trouble doing obvious things with my Linux portable I think you can assume anyone at all normal with be having even more trouble.

I found out about the Multisystem project, and managed to create a USB stick with Kubuntu 11.04, Ubuntu 11.04, OpenSUSE 11.4, Mandriva 2010, Mandriva 2011 RC2, Fedora 15, Debian Squeeze 6.0.1. Then I tried running each of these distributions in turn and trying to install them onto my HP 2133.

My Great White Hope was SUSE because that was the distribution that my HP 2133 originally came with. I never got it running when I first tried to boot my new HP 2133 because I got a 'grub error 18' or similar, after i got it home and tried to boot it - that would have put off 99% of potential Linux users straight away. The install of OpenSUSE 11.4 started and then after a while it died. Oh dear.

Next up was Fedora 15, and I got as far as the initial screen after being warning that my machine wasn't powerful enough to run Gnome 3. I started the 'install to hard disk' tool, and it died after a minute or two. Not enough memory, some other problem? I've no idea.

I was beginning to run out of ideas and then I thought of Mandriva and tried to install Mandriva 2010. That went well until I tried to boot and the Mandriva grub 1.0 install clashed with the grub 2.0 install of Kubuntu that I had on the second partition in my netbook. The naming scheme for partitions has changed between grub 1.0 and grub 2.0 and it isn't a good idea to combine them.

Large parts of my weekend were beginning to disappear even though it should be simple for an expert user to set up a netbook. Then I found out that Mandriva 2011 uses grub 2.0 and installing that worked great.

Back to bird collective nouns - I clearly had a 'flight of Mandrivas' here. I like the changes that Mandriva have made to the KDE UI. They have their own custom Plasma panel. It is black and doesn't look horrible like the default grey Plasma panel does. It doesn't have multiple virtual desktops by default. I thought I should just try the default UI at the Desktop Summit and see how I got on with it.

I wonder what has gone wrong with KDE Usuability considering we have a usuability expert on the KDE eV board. I find trivial usuabiltiy problems with KDE really annoying. For instance, if I set up KDE Wallet why do I have to give it a different password once I have logged in? Why doesn't it trust me?

If I try and make my laptop suspend while it is still connected to the mains power by shutting the lid, why doesn't it just suspend? I have to disconnect the power cable and then shut the lid again. Then plug the mains cable back in to ensure the laptop doesn't run out of power. Why do I get a completely useless notification about 'your laptop has had its power removed'? Of course I know that because I just removed the power cord to get round the problem of the laptop not suspending properly.

When my laptop wakes up it asks me for a password before it unlocks the screen. Why does it lock my screen by default? I don't know. I used to work on real time trading systems at a bank, and there was certainly a policy there of using screen lockers. But banks must be about 1% of KDE's users and so I have no idea why locking the screen is something a non-expert user should be confronted with. I haven't worked out how to disable screen locking even though it is a complete pain in the arse.

I have no idea why the KDE menus still have underlines in them to allow you to navigate without a mouse. That made sense when mice were rare things in the early eighties and power users without mice could navigate through Windows 1.0, but now about 1% of people use that option why are the other 99% being exposed to such an ugly UI.

I'm looking forward to the Berlin Conference and I hope we can make it an opportuntity to sort out KDE's 1980s and 1990's UI problems like I've just described. Clearly Plasma Active is in advance of everything else in my opinion, but the normal widget based normal KDE UI isn't. I still think we can do better than Mac OS X Lion and I look forward to hearing the opinions of KDE UI experts at the conference.


          GObject to Qt dynamic bindings   

KDE Project:

A couple of years ago I started on a project to create a Qt language binding using the Gnome GObject Introspection libraries to generate QMetaObjects, so that it would be possible to base a language binding on a dynamic bridge between the two toolkits. I started a project in the KDE playground repo, and then Norbert Frese joined in with a companion project called go-consume that was based more on static C++ code generation. I wrote some blogs about how the QMetaObjects creation worked; Creating QMetaObjects from GObject Introspection data, QMetaObject/GObject-introspection inter-operability progress and QMetaObject::newInstance() in Qt 4.5.

We were hoping to give a talk at the Gran Canaria Summit about GTK bindings for Qt, but it didn't get accepted. At the time, I was so busy doing other things that I never managed to follow through and complete the bindings. So the project had languished for the past couple years. Recently I've got going with it again and the project is now being actively developed on Launchpad as smoke-gobject.

I went to the recent Ubuntu UDS conference in Budapest, which was great. The were loads of talks, meetings and other events and I was amazed that Canonical and the Ubuntu community apparently manage to put on an event of this size every six months.

My connection with Ubuntu was that I had been doing some work on fixing bugs with the Unity-2d desktop shell, and had made a start with understanding the code. That project is written in Qt C++ with a lot of QML too. What I found interesting was that it also used Gnome libraries and needed to wrap them in a more Qt-developer friendly Qt/C++ layer. That made me think of the bindings project I never finished a couple of years ago. I discussed doing a binding with the Unity-2d guys at UDS, and they seemed keen on the idea. There are two desktop shell projects for Ubuntu, one called 'Unity-2d' which is the Qt C++/QML one, and a pure Gnome project called 'Unity-3d' which is similar but has more advanced graphics requirements. The Ubuntu guys wanted to create a library that would be written using Gnome apis that could be shared by both Unity-2d and Unity-3d. So it sounded like a perfect test project to see if an automatically generated binding would be possible.

It is now possible to create instances, call instance methods, call methods that are in a namespace, get and set Qt properties that map on to GObject properties, connect to slots and signals in the Qt manner. The marshalling code is pretty complete, although the GObject Introspection marshalling options are pretty large and complex, and it has taken a fair bit of time to get it working.

All that stuff happens inside the QMetaObjects via arcane methods such as QObject::qt_metacall(), and it isn't very easy to write about 'black boxes' of code that do all this exotic stuff. Just recently though, I have finally got as far as the C++ code generation and there is finally something I can point to and describe, that makes it relatively easy to follow what the project is about.

So on principle of 'a code snippet is worth a thousand words', here is a sample of what you get if you run the GObject Introspection description for 'Gtk' through the smoke-gobject runtime. This is the header for the Gtk::Button class:


#ifndef GTK_BUTTON_H
#define GTK_BUTTON_H

#include "gtk_bin.h"

namespace Gtk {

class Button : public Gtk::Bin {
    Q_OBJECT
    Q_PROPERTY(bool focusOnClick)
    Q_PROPERTY(Gtk::Widget* image)
    Q_PROPERTY(Gtk::PositionType imagePosition)
    Q_PROPERTY(QString label)
    Q_PROPERTY(Gtk::ReliefStyle relief)
    Q_PROPERTY(bool useStock)
    Q_PROPERTY(bool useUnderline)
    Q_PROPERTY(float xalign)
    Q_PROPERTY(float yalign)
public:
    Button();
    Button(QString label);
    Button(QString stock_id);
    Button(QString label);

public slots:
    void pressed();
    void released();
    void clicked();
    void enter();
    void leave();
    void setRelief(Gtk::ReliefStyle newstyle);
    Gtk::ReliefStyle relief();
    void setLabel(QString label);
    QString label();
    void setUseUnderline(bool use_underline);
    bool useUnderline();
    void setUseStock(bool use_stock);
    bool useStock();
    void setFocusOnClick(bool focus_on_click);
    bool focusOnClick();
    void setAlignment(float xalign, float yalign);
    void alignment(float& xalign, float& yalign);
    void setImage(Gtk::Widget* image);
    Gtk::Widget* image();
    void setImagePosition(Gtk::PositionType position);
    Gtk::PositionType imagePosition();
    Gdk::Window* eventWindow();

signals:
    void activate();
    void clicked();
    void enter();
    void leave();
    void pressed();
    void released();
};

}

#endif // GTK_BUTTON_H

To me it doesn't look bad - you have some understandable camel case method names, slot, signals and properties that all do what you would expect them to do. There are a couple of problems with this particular code snippet that need sorting out.

Firstly, notice that there are two constructors with exactly the same arguments, and that wouldn't compile. This is because in the underlying library there are two constructor functions for Gtk::Button that have the same arguments; new_with_label() and new_with_mnemonic() both taking a 'gchar*' utf8 argument. How is a bindings author supposed to sort that out? I'm not sure yet. Certainly many languages like Ruby or Python will have the same issue where the constructors are named after the class instances they construct.

A second problem is with enclosing classes in namespaces like 'Gtk::' or 'GObject::' where there are already C structs called the same thing. So I could call them something like 'Qt::Gtk::Button' or lowercase the namespace to 'gtk::Button' - I haven't decided what to do yet.

The generated code for the .cpp part of the Gtk::Button class looks like this:


#include "gtk_button.h"

namespace Gtk {

static QMetaObject *_staticMetaObject = 0;

const QMetaObject *Button::metaObject() const
{
    if (_staticMetaObject == 0)
        _staticMetaObject = (QMetaObject*) Smoke::Global::findMetaObject("Gtk::Button");
    return _staticMetaObject;
}

void *Button::qt_metacast(const char *_clname)
{
    if (!_clname) return 0;
    if (!strcmp(_clname, metaObject()->className()))
        return static_cast<void*>(const_cast< Button*>(this));
    return Gtk::Bin::qt_metacast(_clname);
}

int Button::qt_metacall(QMetaObject::Call _c, int _id, void **_a)
{
    return Smoke::GObjectProxy::qt_metacall(_c, _id, _a);
}

Button::Button()
{
    Button *_r = 0;
    void *_a[] = { &_r };
    metaObject()->static_metacall(QMetaObject::CreateInstance, 0, _a);
    takeIdentity(_r);
}

Button::Button(QString _t1)
{
    Button *_r = 0;
    void *_a[] = { &_r, const_cast<void*>(reinterpret_cast<const void*>(&_t1)) };
    metaObject()->static_metacall(QMetaObject::CreateInstance, 1, _a);
    takeIdentity(_r);
}

...

void Button::clicked()
{
    void *_a[] = { 0 };
    qt_metacall(QMetaObject::InvokeMetaMethod, 314, _a);
}

...

void Button::setRelief(Gtk::ReliefStyle _t1)
{
    void *_a[] = { 0, const_cast<void*>(reinterpret_cast<const void*>(&_t1)) };
    qt_metacall(QMetaObject::InvokeMetaMethod, 317, _a);
}

...

If you're familiar with code generated by the moc tool, it should look pretty similar. However, with the standard moc, a qt_metacall() function is generated which calls all the slots and properties in the class via a big case statement. Instead in the code above, each slot calls qt_metacall() - ie it works in reverse. No code needs to be generated for the signals and properties as that is all handled by the smoke-gobject runtime.

There is plenty for scope for optimization such as calling the GObject C functions directly where the marshalling is pretty simple. So I think in the long term in can be made efficient although the first version might be slow. I haven't adding virtual method overrides yet, but that shouldn't be too hard as all the info to generate the code can be got from the G-I typelibs.

I you want to checkout and build the project you will need have a GObject Introspection and GObject/Gtk development environment. It is built with cmake, and so it should be just a matter of creating a 'build' directory in the project and typing 'cmake ..' in there. There is a test for the runtime that uses a library that is part of G-I called 'libeverything' and is intended to be a torture test for bindings authors to use to test their code. In the initTestCase() method in tests/everything/tst_everything.cpp you will see this:



void tst_Everything::initTestCase()
{
    int id = qRegisterMetaType();
    Smoke::Global::initialize();
    everythingNamespace = new Smoke::GObjectNamespace("Everything");
}

It will generate the .h/.cpp sources for the Everything namespace. If you want to have a look at what it does with a namespace like Gtk or Gst you can add a 'Smoke::GObjectNamespace * gtkNamespace = new Smoke::GObjectNamespace("Gtk");' line to the above method.

I am going to the Qt Contributor's Summit this week, and one of the topics for discussion is Interoperability with non-Qt code - Should Qt have better interoperability with (GTK+, Boost, ..)?' run by Jeremy Katz. I'm looking forward to getting some feedback..


          基于Spring Boot, Axon CQRS/ES,和Docker构建微服务   
这是一个使用Spring Boot和Axon以及Docker构建的Event Sorucing源码项目,技术特点:
1.使用Java 和Spring Boot实现微服务;
2.使用命令和查询职责分离 (CQRS) 和 Event Sourcing (ES) 的框架Axon Framework v2, MongoDB 和 RabbitMQ;
3.使用Docker构建 交付和运行;
4.集中配置和使用Spring Cloud服务注册;
5.使用Swagger 和 SpringFox 提供API文档

项目源码:GitHub

工作原理:
这个应用使用CQRS架构模式构建,在CQRS命令如ADD是和查询VIEW(where id=1)分离的,在这个案例中领域部分代码已经分离成两个组件:一个是属于命令这边的微服务和属性查询这边的微服务。

微服务是单个职责的功能,自己的数据存储,每个能彼此独立扩展部署。

属于命令这边的微服务和属性查询这边的微服务都是使用Spring Boot框架开发的,在命令微服务和查询微服务之间通讯是事件驱动,事件是通过RabbitMQ消息在微服务组件之间传递,消息提供了一种进程节点或微服务之间可扩展的事件载体,包括与传统遗留系统或其他系统的松耦合通讯都可以通过消息进行。

请注意,服务之间不能彼此共享数据库,这是很重要,因为微服务应该是高度自治自主的,这样反过来有助于服务能够彼此独立地扩展伸缩规模。

CQRS中命令是“改变状态的动作”。命令的微服务包含所有领域逻辑和业务规则,命令被用于增加新的产品或改变它们的状态,这些命令针对某个具体产品的执行会导致事件Event产生,这会通过Axon框架持久化到MongoDB中,然后通过RabbitMQ传播给其他节点进程或微服务。

在event-sourcing中,事件是状态改变的原始记录,它们用于系统来重新建立实体的当前状态(通过重新播放过去的事件到当前就可以构建当前的状态),这听上去会很慢,但是实际上,事件都很简单,执行非常快,也能采取‘快照’策略进行优化。

请注意,在DDD中,实体是指一个聚合根实体。

上面是命令这边的微服务,下面看看查询这边的微服务:
查询微服务一般扮演一种事件监听器和视图角色,它监听到命令那边发出的事件,然后处理它们以符合查询这边的要求。

在这个案例中,查询这边只是简单建立和维持了一个 ‘materialised view’或‘projection’ ,其中保留了产品的最新状态,也就是产品id和描述以及是否被卖出等等信息,查询这边能够被复制多次以方便扩展,消息可以保留在RabbitMQ队列中实现持久保存,这种临时保存消息方式可以防止查询这边微服务当机。

命令微服务和查询微服务两者都有REST API,提供外界客户端访问。

下面看看如何通过Docker运行这个案例,需要 Ubuntu 16.04:
1.Docker ( v1.8.2)
2.Docker-compose ( v1.7.1)

在一个空目录,执行下面命令下载docker-compose:

$ wget https://raw.githubusercontent.com/benwilcock/microservice-sampler/master/docker-compose.yml
注意:不要更改文件名称。

启动微服务:只是简单一个命令:

$ docker-compose up

你会看到许多下载信息和日志输出在屏幕上,这是Docker image将被下载和运行。一共有六个docker,分别是: ‘mongodb’, ‘rabbitmq’, ‘config’, ‘discovery’, ‘product-cmd-side’, 和 ‘product-qry-side’.

使用下面命令进行测试增加一个新产品:

$ curl -X POST -v --header "Content-Type: application/json" --header "Accept: */*" "http://localhost:9000/products/add/1?name=Everything%20Is%20Awesome"

查询这个新产品:

$ curl http://localhost:9001/products/1

Microservices With Spring Boot, Axon CQRS/ES, and Docker


paulwong 2017-02-18 22:00 发表评论

          Ubuntu 9.04 (Jaunty) su Asus EeePC 901   
Con l’uscita della versione 9.04 di Ubuntu Linux, questa distribuzione ha veramente fatto passi da gigante per quanto riguarda il supporto ai netbook. Oltre a rilasciare una versione specificatamente pensata per i piccoli device (che integra di default l’interfaccia Notebook Remix), sono stati inclusi nel kernel tutti i moduli necessari a far funzionare le periferiche […]
          Neue Software Boutique in Ubuntu MATE unterstützt Snaps   

Martin Wimpress hat auf Google Plus verlauten lassen, dass die Entwickler von Ubuntu MATE an einer neuen Software Boutique für Ubuntu MATE 17.10  Artful Aardvark geschraubt haben oder noch schrauben. Sie hoffen, dass die Neue bis zur finalen Version von Ubuntu MATE 17.10 fertig ist. Eine kleine Vorschau gibt es schon. Die Software Boutique wird von Ubuntu MATE Welcome getrennt. Sie ist ein sehr anwenderfreundlicher, aber leicht eingeschränkten Paket-Manager für Ubuntu MATE. Die Software ist sogar noch etwas mehr. Sie schlägt […]

Der Beitrag Neue Software Boutique in Ubuntu MATE unterstützt Snaps ist von Linux | Spiele | Open-Source | Server | Desktop | Cloud | Android.


          Retomando os Podcasts?   

Sim e Não!

Na verdade, não pretendo retomar, no curto prazo, a gravação usual dos podcasts. Por absoluta falta de tempo (e expertise) para fazer algo com a qualidade dos podcasts que tenho ouvido: Vladimir Campos, FalaFreela e Código Livre.

Mas com o Gengibre, me animei de fazer micro-podcasts :-)

Apontador para o Episódio (serviço descontinuado!)

Sempre que sair um micro-episódio (legal) por lá, eu publico o tocador aqui :-)

[atualização - agosto 2016]
Mais um serviço online que foi descontinuado! Mais um exemplo do porquê não devemos confiar nosso conteúdo em serviços de terceiros!
[/atualização - agosto de 2016]

Do Ubuntu eee


          Episódio # 6   

Já está no ar o episódio #6 do Sérgio Podcast. Falo de wasabi, yahoo educação, entre outras coisas. Para acessar os arquivos do podcast e/ou o link do rss aponte seu navegador para http://sergioflima.pro.br/podcasts/

Todos as ligações para os itens abordados tambem estão por lá.

Do Edubuntu


          Füzérradvány    
Munka után.

NEM ÁPRILISI TRÉFA VOLT A ZEMPLÉNI HÉTVÉGE

 

Nem áprilisi tréfa volt a Füzérradványon megrendezésre kerülő Trail-O Váltó Országos Bajnokság. Kellemes napsütéses, szinte már nyárra emlékeztető időben az impozáns Károlyi Kastély parkjában rendeztük meg az idei Trail-O Országos Bajnokságok második számát a váltót. A Denevér Egyesület három bajnoki címet, két második és két harmadik helyezéssel térhetett haza.

Eredmények:

Para kategóriában vegyes váltóban Bajnoki címet szerzett Hamvai Balázs, csapattársa a Debreceni Nagyerdők Sportegyesület versenyzője, Gabnai Nóra.

U18 Open kategóriában Bajnoki címet szerzett Surányi Kinga és Jászai Tamás alkotta csapat, második helyen végzett Nagy Dóra és Nagy Zsófia, harmadik helyezett Hamvai Máté és Prekopcsák Gábor.

U14 Open kategóriában Bajnoki címet szerzett Kalán Zsuzsanna Boglárka és Jászai Enikő, második helyezést ért el Prekopcsák Réka és Hibján Eszter, harmadik helyezettek Serfőző Bálint és Prekopcsák Balázs.

Ezen a hétvégén, azonos helyszínen rendezték meg a Heliktit Kupa Országos Rangsoroló versenyt, ahol szintén több dobogós helyezést is begyűjtöttek a Denevér Egyesület versenyzői. Első futamban egy újdonságnak számító városi középtávú versenyszámban, második futamban egy erőt próbáló sok szintkülönbséggel és sziklával nehezített normáltávon mérhették össze tudásukat a versenyzők.

Eredmények:
F16B: 1. Jászai Tamás, 2. Hamvai Máté, 3. Prekopcsák Gábor
F12C: 2. Serfőző Bálint
N16B: 2. Surányi Kinga, 3. Kalán Zsuzsanna Boglárka
N18B: 3. Nagy Dóra
Nyílt Kezdő kategóriában: 2. Hamvai László és Hibján Tamás

A csapatok versenyében a Diósgyőri Tájfutó Club és a Románából érkező Maratin Egyesület mögött a Denevér Egyesület szerezte meg a harmadik helyett.


          Sad experience with Debian on laptop...   

KDE Project:

Until a few weeks ago, I had Kubuntu running on my Acer Aspire 5630 laptop (as described here), and was more or less satisfied. It looked great, hardware support was satisfying, but I was missing the incremental package upgrades that I was used to on Debian (so that things break one small piece at a time, not everything at the same time when you do an upgrade). When, after upgrading to gutsy, the laptop would lock up every few minutes for a minute or so, I thought it was a Kubuntu problem and took it as the reason to setup Debian instead. BIG MISTAKE!!!!

After I had Debian installed, I realized how bad Debian's Laptop support really is:

  • KNetworkManager would not work with any WPA-encrypted WLAN networks (I can only connect to unencrypted networks); So after booting, I now need to run wpa_supplicant manually as root with the proper settings...
  • The ACPI DSDT in the BIOS is broken on this laptop, so suspend and hibernate won't work. In Kubuntu, I could simply fix the DSDT.aml and put it into the initrd, where the kernel picked it up. Infortunately, Debian developers decided not to include that patch, so I can't replace the DSDT with the fixed one in the stock kernel. The patch is also not upstream, as described on the ACPI page, because the kernel devs feel that inter alia "If Windows can handle unmodified firmware, Linux should too.". I think so too, but currently that's simply wishful thinking and does not have a bit to do with reality!!! I have yet to see one laptop where ACPI simply works out of the box in Linux! As a consequence, it seems that I will need to patch and compile the kernel myself for every new kernel upgrade (and of course also the packages for the additional kernel modules to satisfy the dependencies!)! The kernel devs again argue that "If somebody is unable to rebuild the kernel, then it is hard to argue that they have any business running modified platform firmware." Again, I agree, but just because **I AM** able to compile a kernel, does not mean that I should be forced to compile every kernel myself that I ever want to use!
  • The Debian kernel also does not include the acerhk module, which is needed to support the additional hot keys on the laptop

So, in short, I now have a laptop without properly working WLAN, no suspend and hibernate, and no support for the additional multimedia keys. Wait, what were my reasons to buy a laptop? Right, I wanted it for mobile usage, where I'm connected via WLAN, and simply open it, work two minutes and suspend it again...

I'm now starting to understand why some people say that Linux is not ready for the masses yet. If you are using Debian, it really is not ready, while with Kubuntu, all these things worked just fine out of the box (after I simply fixed the DSDT).

If having to recompile your own kernel every time it is upgraded is the price to pay for running Debian, I'm more than happy to switch back to KUbuntu again (which will cost me another weekend, which I simply don't have right now). The KUbuntu people seem to have understood that good hardware support is way more important than following strict principles (since the kernel devs don't include the dsdt patch, the Debian people also won't include it, simply because it's not in upstream... On the other hand, they are more than happy to patch KDE by self-tailored patches and cause bugs by these patches!!!).


          Comment on Ubuntu One by vidmate   
It's difficult to find educated people on this subject, but you seem like you know what you're talking about! Thanks
          Comment on Ubuntu One by happy fourth of july   
Hey there, You have done a great job. I'll definitely digg it and personally suggest to my friends. I'm confident they'll be benefited from this site.
          Comment on Ubuntu One by antivirus software for macs   
I enjoy what you guys are usually uup too. This type of clever work andd coverage! Keep up the wonderful works guys I've added you guys to my personal blogroll.
          Comment on Lamp, Mamp and Wamp by Andy   
Pretty good instructions for Linux. I installed on Ubuntu 14.04 and there were 2 differences from your instructions: 1. The root directory for putting the test.php file is /var/www/html/ 2. The mySQL install already asked me to set a password, so the mysql -u root command failed. I skipped this step. Thanks 28.10.2015
          Comment on Ubuntu One by comment   
Apart from quick processing , independence of service and limitations to what you may can borrow , what exactly are a number of the other advantages that you could look forward to from an online payday loan. You are never asked to go through any credit checks. We strive to provide some of the lowest fees in the industry.
          Comment on Ubuntu One by Top cpanel reseller web hosting in Mayang Imphal   
It is essential to supply customers having a summary page of these orders to allow them to review all of the details before making the purchase. Internet nnot merely adds towards the luxury and comforts in your lifetime but also provides an ample space for earning your livelihood too. In addition, their reseller plans have site builder software, 32 self installing PHP scripts, FTP accounts, and My - SQL databases. This is done through the use of reseller website hosting. There are a level of very good factors why individuals are getting associated with running their oown companies online. If you are seeking cheap and reliable internet hosting, VPS is for you. The Web hosting India service commences with buying your own personal Uniform Resource Locator (URL). You don't really must do a much more work to earn that additional income. A site is an individual with the most widespread advertising and marketing instruments utilised by around date entrepreneurs in buy to succeed in and seize a wider purchaser base for solutions and services. But in case you hqve signed up totally free service these ads could be annoying sometimes.
          Comment on Ubuntu One by Amazing Selling Machine bonus   
Greate post. Keep posting such kind of information on your page. Im really impressed by your blog. Hi there, You have done an excellent job. I will certainly digg it and for my part recommend to my friends. I am confident they'll be benefited from this web site.
          Comment on Ubuntu One by Http://Chelseloera.Wordpress.Com   
What's up it's me, I am also visiting this website daily, this web site is in fact nice and the visitors are genuinely sharing nice thoughts.
          Lubuntu 15.04 Beta 1   
We’re preparing Lubuntu 15.04, Vivid Vervet, for distribution in April 2015. With this Beta pre-release, we are now at the stage of being semi stable. However pre-releases are not suitable for a production environment. Note: this is an beta pre-release. Lubuntu pre-releases are NOT recommended for: regular users who are not aware of pre-release issues […]
          Lubuntu 14.04.2 available   
Lubuntu developers are proud to announce that (with a delay of two weeks) version 14.04.2 of the fast and lightweight operating system is now available for download via this link. Release Manager Walter Lapchynski explains: we had to delay the release for two weeks because of problems with X meta-packages which caused us numerous re-spins. […]
          The blue Unicorn set free!   
After the success[1] of their first Long-Term-Support (LTS) version in April this year, the Head of the Developer Team, Julien Lavergne, has finished work on the Utopic Unicorn which can now be downloaded at https://help.ubuntu.com/community/Lubuntu/GetLubuntu. Acting Release Manager, Walter Lapchynski, shortly after the release: “This cycle we mainly focused on fixing known bugs. But”, he […]
          Lubuntu 14.10 Utopic Unicorn Final Beta   
Testing has begun for the Final Beta (Beta 2) of Lubuntu 14.10 Utopic Unicorn. Head on over to the ISO tracker to download images, view testcases, and report results. If you’re new to testing, anyone can join and they don’t have to be Linux Jedis or anything. You can find all the information you need […]
          A not so smart TV and (of course) more Priyo   
Christmas came and went and, because I sent cards this year, and sent them a couple of weeks early, I got more cards than usual.  I also got something I really wanted and most definitely didn’t need, but it was a perfect gift choice because it came from me.  I have had a TV-viewing set up for several years which consisted of my TV hooked up to an Apple Mini computer through which I did all my viewing by streaming the shows I liked via file-sharing sites.  This enabled me to get rid of cable (I will NOT pay to watch ten or more minutes of ads every half hour!)  It also allowed me to watch series from other English-speaking countries - Canada, New Zealand, Ireland, Australia and Britain all have some great shows.  But it was a clumsy set up - lots of wires because I also needed to attach a wi-fi receiver to catch the signal from my router upstairs and I needed a wireless keyboard and mouse.  I asked around about “smart” TVs and was informed that I could browse the web using only the TV itself - the internet access was built in.  I looked at Consumer Reports and their “best buy” was a Samsung, which was about a thousand dollars less than all the other brands rated as highly.  I went to the store and for the first time ever in my history with this store I got some incorrect info from a clerk who was, I assume, a temporary Christmas hire.   Consumer Reports was pretty happy with all the rated aspects of this TV - picture quality and every measure thereof. 

So I happily made my purchase.  The delivery/set up men were great.  The picture was everything CR had promised.  I got even more local antenna channels than I had with the old set (I DO watch a half hour of news each night, gritting my teeth through the ads).  So there I sat in bliss watching a much larger, much better picture - so clear and bright that I almost didn’t notice that most of what I was watching was ads.  I gave my old set-up - TV, wifi receiver, Apple Mini and the lot - to my favorite nephew Sebastian, who was delighted to receive them.  Sebastian is acquiring the necessities for setting up house; he is looking to buy his first house and get out of the family domicile where his mother and older brother are driving him slowly nuts.  After a day or so, during which Sebastian took off for Pennsylvania to spend a late Christmas with his sister and father and their entourages, I began to seriously learn the pleasures of surfing the net with my new smart web browser. 

I found the navigation quite difficult because, of course, searches required using a pop-up keyboard displayed on the screen and scrolling was only possible by repeated use of the up/down/left/right arrow and select buttons on the remote.  This wasn’t wholly surprising, although I did seem to have more difficulty than could be explained solely by the clumsy remote requirements.  Often searches didn’t seem to take, and often I couldn’t get to the content I wanted because ad screens kept popping up.  I figured a keyboard would solve most of the problems - with the faster, easier data entry and a built in trackpad for scrolling, I would find it far easier to solve any other problems and learn the vagaries of this set.  I got a Samsung keyboard specifically designed for smart TV web browsing.

I soon discovered that my initial impression that I couldn’t get anything done was the correct one.  Having the faster text entry via keyboard just meant failing to achieve my object more quickly and more frequently within a given period of time.  The set has a built in web browser based on the ubuntu type language (I know; I never heard of it either!) which is absolutely worthless.  It will not permit adding ad blocking apps.  It will also not permit downloading a better browser (e.g. Chrome, Firefox) which do allow ad blocking and blacklisting of risky sites.  Sebastian offered to return my Mini (an offer which I shame-facedly accepted) but he is still disporting himself in the fleshpots of southern Pennsylvania for another day or two.  Several links are built into the Samsung browser (when I went on a website that evaluates and tells you which browser you are currently using, it told me I was using “Samsung Browser 1”, which I will abbreviate as SB1).  These links are for popular sites - mostly pay sites like VUDU, HBO, Netflix - but one link is for Youtube.  I decided, pending Sebastian’s gracious return of my computer, to content myself with watching some Youtube fare.  But I discovered that the way the browser from Hell displayed these sites was different from all others, which I have previously found always to be identically displayed on every browser I have used before now - Microsoft’s IE, Apple’s Safari, Chrome, Firefox.   SB1’s format showed less variety of suggestions in the first screen as well as fewer links, and it was next to impossible to get beyond those few choices.  Screens - both the selction screens and those within the Youtube videos themselves, seemed to be larger than the display area - so that there were overflow bits (tops of heads and so forth) not visible.  The picture sizing keys, netted me a pop up that said ‘not available’.  Pressing the select key often started a video entirely different from that which I thought I had chosen.  My home page, which is Yahoo, also was displayed differently and in a way that required scrolling to see headlines of articles - I could only see one at a time.

When I first enter the browser it shows an initial screen of ‘Featured’ links (read “paid for”).  One of these is Google.  When I click on the Google link, the words ‘google.com’ appear in the URL bar; there is a long wait and then a screen displays that says this website is not available.  When I clear this and actually type in ‘www.google.com”, and enter it, it reverts to ‘google.com’ and I get the same result; not always but for about three out of four tries. 

As a side observation there were two problems with the keyboard setup in itself.  First, the keyboard was defective and transposed the values for double quote and ‘@‘.  Imagine having to remember to press shift-quote when you wanted to enter a URL name with the ever-needed ‘@‘.  Secondly (and I later saw this mentioned in a how-to video for setting up a keyboard with the Samsung), the bluetooth receptor is behind the screen which causes enough interference so that in using the trackpad, the scrolling function is jittery and often stops for a bit before you reach the position to which you are scrolling. 

Internet rumors also have it that Samsung not only does not allow ad blocking, but in fact has itself embedded ads in its browser.  I tend to believe it - I cannot get anything done because of the many ad screens that show up when I press ‘play’ on the video I want, on those rare occasions when I can get to said video in the first place. 

I do not know if Samsung is unique in the utter uselessness of its web browser; I suspect not.  I do think Consumer Reports needs to include evaluation on the ’smart’ aspect as well as the ‘TV’ aspect of its evaluation of this type of TV.  I have ended up with a setup that I could have gotten for several hundred dollars less - a big dumb TV being used as a video monitor for my trusty old Apple Mini.  If anyone out there is considering a smart TV, I urge him or her to find one already set up where he can evaluate its usability for any internet related activity by actually trying the functions he plans to use.  There is NOTHING that is reliably doable on the one I have. 

On to happier notes: Priyo’s term at the police academy lasted eleven months; he graduated second in his class, which pissed him off mightily; at half term he had been number one (and, had he remained at that rank, he would have had his picture in the local paper as well as receiving a big trophy for which he had lusted all term).  He is now the equivalent of a state trooper or highway patrolman, rather than a city cop.  In fact he has been to several accidents already, as well as a post mortem.  He has been offered - and refused - his first bribe by a cyclist driving without a license.  He was also involved in a 3 a.m. raid to pick up a wanted man.  I asked him if he scared the family and he told me, “No; they scared us!” 

So many men had been accepted into the academy that they had to divide the class and send the second half for training in Assam.  This latter group will not finish until early February.  Thus, until the twelfth of February, Priyo will be assigned to his local district, south of Imphal, rather than his expected assignment in Senapati north of Imphal, so that he is working, but not accumulating seniority at his official posting.  They won’t start Priyo’s lot earlier than the Assam lot in order to avoid whining down the road about unfair seniority if a promotion is given to one of the earlier finishers.  He has also discovered that until July he will be stationed in Imphal (with a month in nearby Thoubal) for on-the-job training in various types of tasks before being located in his officially assigned district of Senapati.  He is currently using his rare days off to find a nice place in Imphal for us to stay.  He has one possibility in view, though the landlords are strict Hindus and we will not be able to eat beef or pork there.  I asked how they would know, and he said, “By the smell.”  Christians and Muslims are not the only sects to meddle in their neighbors’ business or to be more acutely conscious of others’ diversity than they are of their own virtue (which is generally taken for granted!). 

As hoped now, I will head for Imphal around Feb 15, give or take.  First I want to be sure there is an apartment WITH WINDOWS.  Priyo and I are as one on the topic of what a cesspit our last place was.  As soon as he gives me the go-ahead date I will look for the earliest flight that is affordable.  I will probably plan to stay until the end of June if it continues to appear that Priyo will be training in Imphal until July.  It kills me to miss spring here in Reedville, both because it is the garden planting season and because it is so lovely and serene to be here during that time of early blooming, lilacs, and the greening of everything.  Imphal is anything but serene!  But it is clear to me that the length of time that Priyo and I have been apart is detrimental to our happiness.  It is not that we are less committed or care less or anything like that; it is that with him being at the academy and now on a new career, and me being in a land he has never seen, our daily conversations are limited to events past or to describing people and places that one or the other of us have never known, or to news topics when most news is on subjects of which the other has no knowledge.   We both know that there are going to be at least 6 month stretches apart in future, but a year is just too long. 

One of the great problems with dating when one is older is that it is easy to get into the mindset that one has achieved a fairly satisfactory life and that the other person must fit into an established schedule and set of relationships and interests.  Nobody is out there looking to change everything - most are not open to changing anything.  But the fact is that when you are really in love, all those ‘musts’ seem like nothing; the problem is that you fall in love after you meet someone which can be precluded by screening out 'imperfect' choices at the get-go.  Priyo makes me very happy and that is not worth cutting short for a pretty spring or pretty garden alone here in NY.  I hasten to say he could be labelled an imperfect choice only by the fact of our distance from each other. 

So we shall see what is next!

          Give us mo' momo   
Katmandu Momo serves up crave-worthy Nepalese dumplings.

Since Saroja Shrestha and her husband, Kyler Nordeck, started the Katmandu Momo food truck in 2014, we've been following it around Little Rock and on social media in pursuit of our momo fix. What's a momo, you ask? It's a type of steamed South Asian dumpling, popular in Nepal, Tibet, Bhutan and parts of India, and Katmandu Momo's version is addictive.

Shrestha grew up in Katmandu, Nepal, and first came to the U.S. to attend Henderson State University in Arkadelphia. She stuck around Arkansas after college and met and married Nordeck. Shrestha enjoyed cooking, got acclaim from friends when she made momos and had a business administration degree, so she and Nordeck decided to open a food truck. After three years of success as a mobile eatery, they opened up shop in a corner stall of the River Market's Ottenheimer Hall earlier this year. The food truck remains in operation around town, but, man, oh man, are we glad to have a fixed location to get our fix of Nepalese deliciousness.

For those new to Katmandu Momo's cuisine, you're in luck: The options are few and all tasty. The steamed momos come filled with beef, chicken or veggies. They're roundish, creased together with a swirl on top (Shrestha assembles each momo). All the fillings are marinated in spices that may be somewhat familiar — cumin, coriander, turmeric — along with fresh garlic and ginger, but together, taste unlike anything we've tried before. They come with achar sauce, which is thin and tomato-based with hints of sesame oil and a slow heat. Dump the momos thoroughly in the achar sauce and lean all the way over your to-go container — the momos are juicy and, if you're not careful, you'll get splattered. We've had all varieties many times. They're all excellent, but we prefer the crunch of the veggie, which have a stewed-like quality, and apparently we're not alone. Nordeck says it's hard for Shrestha to make enough veggie to satisfy the demand.

The veggie momos are also vegan, as are all three of the sides. The long-grain jasmine fried rice was buttery and golden (we suspect there's a healthy seasoning of turmeric, saffron or both), with a wonderfully uneven toasted quality. It was like paella rice, but without the crunch. If the aloo dum, or spicy potato salad, was on a chain restaurant menu, it'd have a little hot pepper symbol next to it to warn the capsaicin-averse. It's spiced liberally with fennel, cumin, green onion and cilantro, and enough heat to make the crunchy, mild spring roll a nice go-between.

You can get large portions of each of the sides for $4, or get them as part of a combo. It's $8.99 for 10 momos, eight momos and a side or six momos and two sides.

Katmandu Momo
Ottenheimer Hall
400 President Clinton Ave.
351-4169
facebook.com/katmandumomo

Quick bite

Katmandu Momo's regular special is chicken chow mein, a smoky tangle of spaghetti mixed with blackened pieces of chicken and green pepper and covered with garam masala and other spices. It's a massive portion and, like everything else, delicious. You can get a veggie variety, too.

Hours

10:30 a.m. to 6 p.m. Monday through Saturday.

Other info

Credit cards accepted, no alcohol.



          Asp.net Core, Docker e Docker Swarm   

A me sembra passata un'eternità. Intorno al 2004 si poteva utilizzare codice scritto per il Framework.Net su Linux con Mono. Da appassionato da moltissimi anni anche del mondo linux, la cosa mi era sembrata fin da subito interessante (alcuni punti li avevo trattati anche su questo mio blog) e mai avrei credito che di punto in bianco la stessa Microsoft si decidesse un giorno di supportare pienamente un suo diretto concorrente quando iniziò lo sviluppo di asp.net Core.

Lo sviluppo di applicazioni web su linux è ora più facile che mai e in rete si trovano parecchie guide che mostrano quanto sia alla portata del developer medio (ricordo che anche con mono e linux era possibile sviluppare applicazioni web con asp.net, ma personalmente trovavo la cosa come una dimostrazione delle potenzialità di mono e nulla da utilizzare realmente in produzione: mai mi sarei sognato di sviluppare web app in asp.net su linux). Un'ottima guida, in fase di miglioramento, riguardante la configurazione di asp-net core su Linux, la si può trovare in questo post di Pietro Libro.

Quest'apertura di Microsoft verso altri mondi non si è fermata solo a Linux, ma anche al mondo Apple, e, non accontentandosi, non si è fermata dinanzi neanche a novità come il mondo dei container dove Docker fa ormai da padrone, e ufficialmente rilascia immagini anche per questo mondo. Il suo uso è semplice, avviato Linux possiamo prendere una nostra soluzione appena realizzata con Visual Studio ed avviarla senza installare alcunché (se non lo stesso Docker, naturalmente). Esempio, scaricato in una directory il progetto, basta avviare docker in questo modo:

docker run -it --name azdotnet -v /home/az/Documents/docker/aspnetcorelinux/src/MVC5ForLinuxTest2:/app -p 5000:5000 microsoft/dotnet:1.0.1-sdk-projectjson

Scaricate le immagini indispensabili all'avvio, viene restituito il prompt all'interno del container di docker, dove possiamo scrivere:

dotnet restore dotnet run

Ecco che vengono scaricate le dipendenze e viene compilato il progetto. Alla fine:

Project MVC5ForLinuxTest (.NETCoreApp,Version=v1.0) will be compiled because the version or bitness of the CLI changed since the last build Compiling MVC5ForLinuxTest for .NETCoreApp,Version=v1.0 Compilation succeeded. 0 Warning(s) 0 Error(s) Time elapsed 00:00:02.8545851 Hosting environment: Production Content root path: /dotnet2/src/MVC5ForLinuxTest Now listening on: http://0.0.0.0:5000 Application started. Press Ctrl+C to shut down.

Dal browser:

Prima di proseguire due parole sui parametri utilizzati con docker. Il parametro -it serve per connettere il terminale attuale al contenuto del container. Avremmo potuto anche utilizzare:

docker run -d ...

Dove d sta per detach, in questo modo non avremo visto nulla sul terminale e saremmo rimasti nella shell di linux attuale. Di contro non avremmo visto l'output della compilazione e non avremmo potuto inviare i comandi di compilazione immediatamente. E' sempre possibile riconnettersi ad un container avviato in detach mode per controllare cosa sta succedendo, per esempio:

docker logs azdotnet

Questo comando visualizza il contenuto del terminale all'interno del container (aggiungendo il parametro -f il comando non ritornerebbe al prompt ma continuerebbe a rimanere in attesa di nuovi messaggi). Infine ci saremmo potuti riconnettere con il comando:

docker attach azdotnet

Il parametro -v serve per gestire i volumi all'interno dei container di docker. Riassumendo, in docker si possono definire due tipi di volumi, il primo, quello più semplice e utilizzato nel mio esempio, permette di creare un collegamento all'interno del container con il disco del compute host che ospita la sessione di docker. Nel mio esempio ci sono due path divisi da ":", a sinistra c'è la prima parte del path sul disco dell'host, a destra il path all'interno del container. Nell'esempio: /home/az/Documents/docker/aspnetcorelinux/src/MVC5ForLinuxTest2 è il percorso nell'host che sarà mappato come app all'interno di docker. Solo per completezza, il secondo tipo di volume è quello gestito internamente da docker: eventuali volumi montati in questo modo saranno gestiti in un suo path privato all'interno di docker; quest'ultimo tipo è comodo per condividere tra più container directory e file inseriti in altri container.

Il parametro -p viene utilizzato per definire quali porte docker deve aprire nel container verso l'host; così come il parametro per i volumi, anche questo accetta due valori suddivisi dal caratteri dei due punti in cui il valore a destra definisce quale porta sarà mappata nel container e a quale sarà mappata nell'host (parte sinistra del parametro).

Infine dobbiamo specificare il nome dell'immagine che dobbiamo utilizzare, essendo questa salvata nell'hub ufficiale di docker, possiamo definirla semplicemente con microsoft/dotnet; se fosse su un altro servizio di immagini di docker avremmo dovuto scrivere il path completo di dominio: miohost.com/docker/hub/microsoft/dotnet. Il parametro dopo i due punti è il tag che specifica quale versione vogiamo utilizzare. In questo esempio usiamo una versione specifica; avremmo potuto usare anche latest per avere l'ultima versione, ma nella pratica reale sconsiglio questa procedura perché, come mi è accaduto più volte, con il passaggio di versioni, si possono riscontrare anomalie che obbligano a mettere mano al tutto. In un primo test che avevo fatto, avevo specificato come tag latest, per poi scoprire, quando era uscita la versione 1.1 di asp.net core, che il progetto non era più compilabile per le differenze di versioni nelle dipendenze. Un altro caso che mi è successo di recente: utilizzando di base un'immagine ubuntu, precedentemente con la versione 14.04.4 era presente nell'immagine un comando per la decompressione di una formato particolare, comando che nell'ultima versione, la 16.04 è stato eliminato; al passaggio a quest'ultima versione di ubuntu il tutto si bloccava con un messaggio inizialmente incomprensibile che poi era basato sulla mancanza di quel comando.

Abbiamo usato spesso come valore nel parametro azdotnet, e questo è il nome che abbiamo dato al nostro container grazie al parametro --name: non assegnandolo noi da comando, docker avrebbe creato uno suo. Se siamo ancora nel terminale connesso a docker, possiamo uscirne con la sequenza Ctrl+P Ctrl+Q. Usando il comando docker ps possiamo vedere informazioni sui container che girano all'interno della nostra macchina:

$ docker ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES df65019f69a5 microsoft/dotnet:1.0.1-sdk-projectjson "/bin/bash" 15 seconds ago Up 11 seconds 0.0.0.0:5000->5000/tcp azdotnet

Se non avessi specificato il nome mi sarei ritrovato questo casuale goofy_shaw:

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 04c3276cfd73 microsoft/dotnet:1.0.1-sdk-projectjson "/bin/bash" 10 seconds ago Up 7 seconds 0.0.0.0:5000->5000/tcp goofy_shaw

Vogliamo fermare il processo in questo container (in entrambi i casi)?

docker stop azdotnet ... oppure ... docker stop goofy_shaw

Voglio cancellare il container e l'immagine?

docker rm azdotnet docker rmi azdotnet

Voglio essere distruttivo e cancellare qualsiasi costa presente in docker?

docker rm $(docker ps -qa) docker rmi $(docker images -qa)

La comodità non si ferma qui. Scopro che c'è un piccolo errore? Avendo il volume montato in locale posso fare (se ho l'editor VSCode di Microsoft, ma qualunque editor di testo fa la stessa cosa):

code .

Quindi, salvato il file, posso ricompilare e riavviare, all'interno del terminal in docker, dopo aver fermato con Ctrl+C

dotnet restore dotnet run

E vedere le modifiche. Potremmo creare anche un'immagine già pronta con la nostra app al suo interno. Il modo più semplice è scaricare quella di nostro interesse, salvarci all'interno i file della nostra app, quindi fare il commit delle modifiche e utilizzarla tutte le volte che vogliamo. Oppure, possiamo introdurre nel codice sorgente della nostra web app un file per la creazione automatica dell'immagine per docker. Ecco un esempio che sarà ripreso anche più tardi:

# Example from AZ FROM microsoft/dotnet:1.0.1-sdk-projectjson MAINTAINER "AZ" COPY ["./src/MVC5ForLinuxTest2/", "/app"] COPY ["./start.sh", "."] RUN chmod +x ./start.sh CMD ["./start.sh"]

Con questo file possiamo creare una immagine con questo comando:

docker build -t myexample/exampledotnet .

myexample/exampledotnet sarà il nome dell'immagine che possiamo usare per richiamare e avviare un container con il suo contenuto. Se provassimo ad avviare questo comando, vedremo che docker scarica, se non già presente, l'immagine di base per il dotnet, quindi, dopo la riga di informazione sul maintainer, verranno copiati i file locali dalla directory ./src/MVC5ForLinuxTest2/ nell'immagine e nel path /app. Lo stesso per il file start.sh. Quindi viene dato il flag di avvio a questo file e quando l'immagine sarà avviata, sarà eseguito proprio questo file. Il suo contenuto? E' questo:

#!/bin/sh cd app dotnet restore dotnet run

Naturalmente avremmo potuto creare già un'immagine compilata, ma questo caso ci aiuterà a comprendere un passaggio importante che vedremo più avanti.

La creazione di immagini non è lo scopo di questo post, quindi vado avanti anche perché la documentazione ufficiale è chiara a riguardo. Voglio proseguire perché il resto è una situazione che si sta evolvendo proprio in questo periodo che sto scrivendo questo post. Avevo trattato nei miei post precedenti il mondo dei micro service e sulla possibilità di distribuirli su più macchine. Proprio nell'ultimo post, prendevo in considerazione i vantaggi di Consul per il discovery dei servizi e altro. In quel periodo mi ero anche interessato sulla possibilità che anche docker potesse fare questo. Con la versione 1.1x, scopro che docker mette a disposizione in modo nativo la possibilità di un cluster di host dove saranno ospitati i vari container. Docker swarm mette a disposizione gli strumenti per fare questo ma solo dalla versione 1.12 il tutto è stato semplificato. Nelle versioni precedenti, dal mio punto di vista, era il delirio: nelle macchine che dovevano gestire il tutto dovevano essere collegate con Consul. Inoltre la configurazione di tutto era macchinosa e, per mie prove dirette, bastava un errore da niente nella configurazione per mandare all'aria il tutto - lo so, la colpa non è in docker ma è mia. Dalla versione 1.12 tutto è diventato banale anche se, lo dico subito, ci si ritrova con un bug subdolo che descriverò tra poco. Innanzitutto, cos'è Docker swarm? Non è nient'altro che la gestione di docker un cluster. Quello che abbiamo fatto prima su una singola macchina con i comandi di base di docker, con Docker swarm lo possiamo fare su più macchine senza preoccuparci di come configurare e il tutto. Tutto è semplice? Sì, eccome! Gli sviluppatori di Docker hanno creato un progetto interessante e veramente sbalorditivo per quello che promette e mantiene (al netto dei bug). Cominciando dall'inizio, cosa dobbiamo avere per mettere in piedi un cluster con Docker swarm? Una o più macchine che gestiranno tutti gli host collegati (nella documentazione ufficiale consigliano N/2+1 dove N è il numero di macchine ma scegliate oltre le sette macchine) definiti manager. Si può provare il tutto anche con macchine virtuali come ho fatto io per l'esempio di questo post ed è pressoché obbligatorio Linux (una qualsiasi distribuzione va bene, sconsigliate macchine Apple e Windows). Nel mio caso avevo due macchine con questi due ip:

192.168.0.15 192.168.0.16

La macchina che termina con 15 sarà il manager e la 16 la worker (di base anche la macchina manager sarà utilizzata per l'installazione dei container, anche se è possibile con un comando fare in modo che essa faccia solo da manager). Su questa macchina, da terminale, avviamo il tutto:

docker swarm init --advertise-addr 192.168.0.15

Se tutto funziona alla perfezione, la risposta sarà:

ReplY: swarm initialized: current node (83f6hk7nraat4ikews3tm9dgm) is now a manager. To add a worker to this swarm, run the following command: docker swarm join --token SWMTKN-1-0hmjrpgm14364r2kl2rkaxtm9tyy33217ew01yidn3l4qu3vaq-8e54w2m4mrwcljzbn9z2yzxrz 192.168.0.15:2377 To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

Più chiaro di così non si può. La rete swarm è stata creata e possiamo aggiungere macchine manager o worker come descritto nel testo. Sulla macchina 16, dunque, per collegarla, scriviamo:

docker swarm join --token SWMTKN-1-0hmjrpgm14364r2kl2rkaxtm9tyy33217ew01yidn3l4qu3vaq-8e54w2m4mrwcljzbn9z2yzxrz 192.168.0.15:2377

Se la rete è configurata correttamente e se le porte necessarie non sono bloccate da firewall (verificare dalla documentazione di Docker swarm, a memoria non le ricordo), la risposta che avremo sarà:

This node joined a swarm as a worker.

Ora dalla manager, la 15, vediamo se è vero:

docker node ls ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS 83f6hk7nraat4ikews3tm9dgm * osboxes1 Ready Active Leader 897zy6vpbxzrvaif7sfq2rhe0 osboxes2 Ready Active

Vogliamo maggiori info tecniche?

docker info Containers: 0 Running: 0 Paused: 0 Stopped: 0 Images: 2 Server Version: 1.13.0-rc2 Storage Driver: aufs Root Dir: /var/lib/docker/aufs Backing Filesystem: extfs Dirs: 61 Dirperm1 Supported: true ... Kernel Version: 4.8.0-28-generic Operating System: Ubuntu 16.10 OSType: linux Architecture: x86_64 CPUs: 2 Total Memory: 1.445 GiB ...

Perfetto. Ora possiamo installare i nostri container e distribuirli in cluster. Per questo esempio ho creato una semplice web application con una web API che ritorna un codice univoco per ogni istanza. Il codice sorgete è pubblico e disponibile a questo url:
https://bitbucket.org/sbraer/aspnetcorelinux

Iscrivendosi gratuitamente in docker, potremo creare le nostre immagini, inoltre è stata introdotta da poco tempo la possibilità di eseguire build automatiche dai nostri dockerfile salvati in github o butbutcket. Ecco il link per l'esempio di questo post:
https://hub.docker.com/r/sbraer/aspnetcorelinux/

La comodità è che posso modificare il codice sorgente della mia app in locale, fare il commit su bitbucket dove ho un account gratuito, e dopo pochi minuti avere una immagine in docker pronta. E proprio quello di cui abbiamo bisogno per il nostro esempio.

docker service create --replicas 1 -p 5000:5000 --name app1 sbraer/aspnetcorelinux

Il comando in docker ora è leggermente differente. Si nota subito l'aggiunta di service: questo indica a docker che vogliamo lavorare nel cluster di swarm. Il comportamento è quasi simile a quello che abbiamo visto finora ma non potremmo collegarci da terminale nel metodo conosciuto. Prima di approfondire vediamo che è successo:

docker service ls ID NAME REPLICAS IMAGE COMMAND cx0n4fmzhnry app1 0/1 sbraer/aspnetcorelinux

E' stata scaricata l'immagine e sta per essere avviata. Dopo alcuni istanti potremo richiamare l'API e vedere il risultato:

curl localhost:5000/api/systeminfo [{"guid":"4160a903-dc66-4660-aafc-5ec8c9549869"}]

E lo potremo fare da tutte le macchine presenti nel cluster. E se volessimo installare più copie di questa app?

docker service scale app1=5

Ecco fatto: ora docker creerà altre quattro container che saranno distribuiti tra le due macchine:

docker service ps app1 ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR 6x8j50wn9475jc70fop0ezy7s app1.1 sbraer/aspnetcorelinux osboxes1 Running Running 6 minutes ago 1cydg0cr7re8suxluh2k7y0kc app1.2 sbraer/aspnetcorelinux osboxes2 Running Preparing 51 seconds ago dku0anrmfbscbrmxce9j7wcnn app1.3 sbraer/aspnetcorelinux osboxes2 Running Preparing 51 seconds ago 5vupi73j7jlbjmbzpmg1gsypr app1.4 sbraer/aspnetcorelinux osboxes1 Running Running 44 seconds ago e5a6xofjmxhcepn60xbm9ef7x app1.5 sbraer/aspnetcorelinux osboxes1 Running Running 44 seconds ago

Una volta avviate, vedremo che Docker swarm sarà in grado di bilanciare tutte le richieste (dal guid si può vedere la differenza di processo):

osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo [{"guid":"4160a903-dc66-4660-aafc-5ec8c9549869"}] osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo [{"guid":"6c9f1637-7990-4162-b69e-623afee378e6"}] osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo [{"guid":"853b6cbb-6394-4a2e-87b9-2f9a7fa2af06"}] osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo [{"guid":"4160a903-dc66-4660-aafc-5ec8c9549869"}] osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo [{"guid":"6c9f1637-7990-4162-b69e-623afee378e6"}] osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo [{"guid":"853b6cbb-6394-4a2e-87b9-2f9a7fa2af06"}] osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo [{"guid":"4160a903-dc66-4660-aafc-5ec8c9549869"}] osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo [{"guid":"6c9f1637-7990-4162-b69e-623afee378e6"}] osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo [{"guid":"853b6cbb-6394-4a2e-87b9-2f9a7fa2af06"}] osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo [{"guid":"4160a903-dc66-4660-aafc-5ec8c9549869"}]

Ora ho voglia di aggiungere una ulteriore informazione nella risposta dell'API. Decido di aggiungere anche data e ora. Creo un nuovo branch, master2, faccio la modifica, commit, creo una nuova immagine in docker con il tag dev. E se volessi aggiornare le cinque istanze che girano sulle mie macchine? Docker swarm fa tutto questo per me:

docker service update --image sbraer/aspnetcorelinux:dev app1

Ora docker fermerà una alla volta i container, lì aggiornerà e li avvierà ancora:

docker service ps app1 ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR 6x8j50wn9475jc70fop0ezy7s app1.1 sbraer/aspnetcorelinux osboxes1 Running Running 10 minutes ago 2g98f5qnf3tbtr83wf5yx0vcr app1.2 sbraer/aspnetcorelinux:dev osboxes2 Running Preparing 5 seconds ago 1cydg0cr7re8suxluh2k7y0kc \_ app1.2 sbraer/aspnetcorelinux osboxes2 Shutdown Shutdown 4 seconds ago dku0anrmfbscbrmxce9j7wcnn app1.3 sbraer/aspnetcorelinux osboxes2 Running Preparing 4 minutes ago 5vupi73j7jlbjmbzpmg1gsypr app1.4 sbraer/aspnetcorelinux osboxes1 Running Running 4 minutes ago e5a6xofjmxhcepn60xbm9ef7x app1.5 sbraer/aspnetcorelinux osboxes1 Running Running 4 minutes ago

Lasciandogli il tempo di aggiornare tutto, ecco il risultato:

curl localhost:5000/api/systeminfo [{"guid":"55d63afe-0b67-47d4-a1d2-4fb0b9b83bef","dateTime":"2016-11-26T21:14:20.411148+00:00"}] osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo [{"guid":"e02997ad-ea05-418f-9be4-c1a9b71bff85","dateTime":"2016-11-26T21:14:25.617665+00:00"}] osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo [{"guid":"ff0b1dfa-42e5-4725-ab11-6fdb83488ace","dateTime":"2016-11-26T21:14:27.157971+00:00"}] osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo [{"guid":"0a7578fb-d7cf-4c6f-a1fa-07deb7cddbc0","dateTime":"2016-11-26T21:14:27.789131+00:00"}] osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo [{"guid":"ce87341b-d445-49f6-ae44-0a62a844060e","dateTime":"2016-11-26T21:14:28.303101+00:00"}] osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo [{"guid":"55d63afe-0b67-47d4-a1d2-4fb0b9b83bef","dateTime":"2016-11-26T21:14:28.873405+00:00"}]

Vogliamo fermare il tutto?

docker service rm app1

Interessante, inoltre, è la capacità di docker di prevenire eventuali disastri esterni. Se nel mio caso, spegnessi la macchina 16, docker se ne accorgerebbe, non invierebbe più richieste alle API presenti su quella macchina, e immediatamente avvierebbe lo stesso numero di container persi in quella macchina in altre presenti nel cluster.

E se volessi fare come nel primo esempio, vedere il log di un preciso container? Purtroppo questo non è proprio facile in docker. Innanzitutto è necessario andare sulla macchina dove è installato e scrivere:

docker ps -a

Quindi, come visto nel caso del parametro --name non assegnato, vedere l'output di questo comando per estrapolarne il nome e connettendosi poi. Non proprio comodo.

Tutto magnifico dunque... no, perché come qualcuno può avere intuito, c'è un problema di base nella gestione di cluster di immagini in docker swarm. Abbiamo visto che possiamo scalare un'immagine e automaticamente docker la installerà su quella o su altre macchine; ma, riprendendo l'esempio della nostra API, quando questa sarà disponibile dalla gestione di bilanciamento di docker? E qui nasce il problema: lui manterrà scollegata la macchina in creazione dal load balancer interno fino a quando questa sarà avviata: e il servizio, come in questo caso, è lento a partire perché scarica dipendenze e compila, che succede? Semplice, docker metterà a disposizione quelle macchine che non sapranno gestire la riposta:

curl localhost:5000/api/systeminfo [{"guid":"55d63afe-0b67-47d4-a1d2-4fb0b9b83bef","dateTime":"2016-11-26T21:14:20.411148+00:00"}] osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo curl: (6) Couldn't resolve host 'localhost' osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo curl: (6) Couldn't resolve host 'localhost' osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo curl: (6) Couldn't resolve host 'localhost' osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo [{"guid":"ce87341b-d445-49f6-ae44-0a62a844060e","dateTime":"2016-11-26T21:14:28.303101+00:00"}] osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo curl: (6) Couldn't resolve host 'localhost'

Ed ecco il primo problema... Come risolverlo? Per fortuna c'è una soluzione semplice. Nel Dockerfile che crea l'immagine, possiamo usare un comando apposito che rende disponibile l'immagine solo se un comando dà risposta positiva. Ecco il nuovo Dockerfile:

# Example from AZ FROM microsoft/dotnet:1.0.1-sdk-projectjson MAINTAINER "AZ" COPY ["./src/MVC5ForLinuxTest2/", "/app"] COPY ["./start.sh", "."] RUN chmod +x ./start.sh HEALTHCHECK CMD curl --fail http://localhost:5000/api/systeminfo || exit 1 CMD ["./start.sh"]

HEALTHCHECK non fa altro che dire a docker se quel container funziona correttamente, e lo fa eseguendo il comando che verifica se il servizio funziona correttamente - nel mio caso fa una richiesta all'API e se solo risponde positivamente il container viene agganciato al load balancer di docker, questo è comodo anche per verificare che la nostra API non smetta di funzionare per motivi suoi e, in questo caso, può avvertire docker del problema. Perfetto... Proviamo il tutto ed ecco l'output dopo aver aggiunto altre istanze della stessa webapp:

curl localhost:5000/api/systeminfo [{"guid":"55d63afe-0b67-47d4-a1d2-4fb0b9b83bef","dateTime":"2016-11-26T21:14:20.411148+00:00"}] osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo curl: (6) Couldn't resolve host 'localhost' osboxes@osboxes2:~$ curl localhost:5000/api/systeminfo curl: (6) Couldn't resolve host 'localhost'

Ma che succede? Purtroppo nella versione attuale, la 1.12, il controllo nel caso di cluster swarm non funziona correttamente. Facendo delle prove personali, sembra che l'healthcheck invii la richiesta a tutto il cluster che ovviamente risponderà positivamente... Purtroppo non ci sono scappatoie ma per fortuna questo bug è stato risolto con la versione 1.13 (ora in RC, ma che dovrebbe essere rilasciata entro metrà mese di dicembre). Infatti, installata la versione 1.13, questo problema scompare - ho verificato di persona. A proposito di quest'ultima versione, è stata rilasciata anche la funzionalità di fare rollback dell'immagine in automatico, ma per ora non ho potuto controllare di persona se la cosa funziona. Inoltre - ed era ora - in experimental è stato inserito il comando per la visualizzazione del log all'interno dello swarm.

E' ora di tirare qualche conclusione personale: con la versione 1.13 il tutto è così facile da far impallidire qualsiasi altra scelta per la distribuzione dei propri service; kubernetes di Google appare fin troppo complicato a riguardo, e anche la soluzione che avevo mostrato con Consul sembra molto più macchinosa. Inoltre sembra che il futuro sia nei containers (pure Microsoft in Azure sta implementando il tutto per renderne l'uso più facile possibile). Inutile dire che la procedura che ho descritto qui sopra, funziona perfettamente sia nel caso di una piccola web farm e sia che si decida di passare al mondo del cloud.

Interessante... altro che...

Tags: ,

Continua a leggere Asp.net Core, Docker e Docker Swarm.


(C) 2017 ASPItalia.com Network - All rights reserved


          snappy sensors   
Sensors are an important part of IoT. Phones, robots and drones all have a slurry of sensors. Sensor chips are everywhere, doing all kinds of jobs to help and entertain us. Modern games and game consoles can thank sensors for some wonderfully active games.

Since I became involved with sensors and wrote QtSensorGestures as part of the QtSensors team at Nokia, sensors have only gotten cheaper and more prolific.

I used Ubuntu Server, snappy, a raspberry pi 3, and the senseHAT sensor board to create a senseHAT sensors snap. Of course, this currently only runs in devmode on raspberry pi3 (and pi2 as well) .

To future proof this, I wanted to get sensor data all the way up to QtSensors, for future QML access.

I now work at Canonical. Snappy is new and still in heavy development so I did run into a few issues. First up was QFactoryLoader which finds and loads plugins, was not looking in the correct spot. For some reason, it uses $SNAP/usr/bin as it's QT_PLUGIN_PATH. I got around this for now by using a wrapper script and setting QT_PLUGIN_PATH to $SNAP/usr/lib/arm-linux-gnueabihf/qt5/plugins

Second issue was that QSensorManager could not see it's configuration file in /etc/xdg/QtProject which is not accessible to a snap. So I used the wrapper script to set up  XDG_CONFIG_DIRS as $SNAP/etc/xdg

[NOTE] I just discovered there is a part named "qt5conf" that can be used to setup Qt's env vars by using the included command qt5-launch  to run your snap's commands.

Since there is no libhybris in Ubuntu Core, I had to decide what QtSensor backend to use. I could have used sensorfw, or maybe iio-sensor-proxy but RTIMULib already worked for senseHAT. It was easier to write a QtSensors plugin that used RTIMULib, as opposed to adding it into sensorfw. iio-sensor-proxy is more for laptop like machines and lacks many sensors.
RTIMULib uses a configuration file that needs to be in a writable area, to hold additional device specific calibration data. Luckily, one of it's functions takes a directory path to look in. Since I was creating the plugin, I made it use a new variable SENSEHAT_CONFIG_DIR so I could then set that up in the wrapper script.

This also runs in confinement without devmode, but involves a simple sensors snapd interface.
One of the issues I can already see with this is that there are a myriad ways of accessing the sensors. Different kernel interfaces - iio,  sysfs, evdev, different middleware - android SensorManager/hybris, libhardware/hybris, sensorfw and others either I cannot speak of or do not know about.

Once the snap goes through a review, it will live here https://code.launchpad.net/~snappy-hwe-team/snappy-hwe-snaps/+git/sensehat, but for now, there is working code is at my sensehat repo.

Next up to snapify, the Matrix Creator sensor array! Perhaps I can use my sensorfw snap or iio-sensor-proxy snap for that.
          como actualizar python 2.7 a una 3.2 en ubuntu   

como actualizar python 2.7 a una 3.2 en ubuntu

Respuesta a como actualizar python 2.7 a una 3.2 en ubuntu

Que versión de Ubuntu estas utilizando?

Publicado el 28 de Junio del 2017 por xve

          como actualizar python 2.7 a una 3.2 en ubuntu   

como actualizar python 2.7 a una 3.2 en ubuntu

Respuesta a como actualizar python 2.7 a una 3.2 en ubuntu

Hola, no se sobre que SO trabajas pero debes saber que en todas las distros Linux tienen dependencia de la version 2.X de Python que viene incluida, si estas sobre Windows simplemente desinstala la version que tengas y descargate e instala la nueva, en Linux debes instalar o verificar si ya lo tienes instalado Python 3.X, puedes verificar la version asi:

python3 -V
Si te dice que no existe ningun paquete debes instalarlo, para las distros ...

Publicado el 28 de Junio del 2017 por kip

          como actualizar python 2.7 a una 3.2 en ubuntu   

como actualizar python 2.7 a una 3.2 en ubuntu

Hola
Estoy aprendiendo Tkinter y el problema es que los tutoriales y libros que encuentro son para versiones 3.x de python y no he podido actualizar python. Sé que es una tonteria, pero enserio que nada que puedo.
Gracias.

Publicado el 28 de Junio del 2017 por toro

          thank   
thank for all Download: Ubuntu 12.10







          Understanding the apt-cache depends Output   

I was using apt-cache in Ubuntu to get a list of dependencies for a certain package and parse the output programmatically, eventually I wanted to programatically download and package them within an archive for offline installs later on. I was not really sure about the exact meanings of the output ...


          Ubuntu 10.04 출시   
Ubuntu 10.04 LTS가 출시됐습니다. Ubuntu는 매년 4월과 10월에 6개월마다 업데이트가 되며, 최소 18개월 이상을 보안 관련 업데이트를 지원하는데, 이번 LTS(Long Term Support) 버전은 데스크탑용이 3년, 서버용은 5년 이상 지원을 받을 수 있습니다. 저는 작년 초부터 Ubuntu 9.04 NetbookRemix 버전을 HP mini 2140에 설치해서 사용중입니다. 주된 용도는 리눅스용 크롬 브라우저로 구글 서비스(이메일, 리더, 문서도구, 캘린더, 번역도구 […]
          Extreme Tux Racer 0.7.4   
One of the major games for Linux is ‘Extreme Tux Racer’, available for Android, Linux, Microsoft Windows, Macintosh operating systems and Ubuntu Touch. The goal of the game is to control Tux, or another chosen character, to get to the bottom of the hill. The character will slide down the hill of snow and ice on his belly. Along the way you can pick up herring.
          Ubuntu Mobile OS Is Coming on a Smartphone Near You in 2014   
After teasing everybody at the turn of the year with the “So close, you can almost touch it” phrase, Canonical …
          Upgrading the Media Server   
This weekend I took the time to upgrade my home server to Ubuntu’s Jaunty release, I’m happy to say things went really well overall.  The only problem I had was related to the way I share my media with the Mac devices in my house.  While samba works ok for a lot of stuff, and […]
          Pushbullet - PC↔안드로이드↔iOS 간 파일/링크/메모 전송 종결자!   

지금은 대부분의 사람들이 스마트폰을 가지고 있는 스마트한 시대라고는 하지만, 의외로 PC에서 스마트폰으로 파일을 전송할때 어떻게 보낼지 고민하게 됩니다. 저도 컴퓨터에 있는 파일을, 그리고 보고있는 웹사이트를 안드로이드폰이나, 아이패드에 어떻게 더 쉽게 전송할 수 있지 않을까 고민했습니다.

이전의 전송방법 VS Pushbullet

살짝 머리아파지는 파일전송을 예를 들어서 이야기 해보죠. 이전에는 PC에서 모바일로 무선으로 파일전송을 하려면..

웹브라우저를 켜서 메일 사이트에 들어간다 → 내 메일주소로 파일을 첨부해서 메일을 보낸다 → 스마트폰에서 메일앱을 켠다 → 첨부파일을 다운받는다

내게 쓴 편지함다들 메일이 수북하게 쌓여있는 '내게 쓴 편지함' 하나정돈 가지고 계시죠?(...)

스마트한 시대에 이런 건 너무 복잡하지않나요? Pushbullet을 사용하면 이렇게 바뀝니다.

보내려는 파일에 오른쪽 클릭 → Pushbullet 메뉴에서 파일을 보낼 기기 선택 → 스마트폰에서 푸시알람이 뜨면 다운로드

Pushbullet 메뉴꽤 간편해 보이죠?

위와같이 Pushbullet을 사용하면 브라우저를 켠다던지, 스마트폰에서 메일앱을 켠다던지 하는 일이 없으니 동선이 확 줄어들죠

거의 모든 플랫폼을 지원

공식적으로 Pushbullet이 지원하는 플랫폼은 다음과 같습니다.

물론 '비공식적으로'도 있겠죠? 서드파티 앱으로 다른 플랫폼용도 개발되어 있습니다.

PC에서 iPad로 파일전송

PC와 아이패드에 앱을 설치하고 각각 로그인합니다. 이때 로그인은 구글계정으로 하게됩니다
PC와 아이패드에 앱을 설치하고 각각 로그인합니다. 이때 로그인은 구글계정으로 하게됩니다
참고로, 파일전송을 하려면 PC앱은 실행이 되어있어야 합니다
PC에 있는 파일을 이렇게 전송합니다
아이패드에서 푸시알람을 누르고
다른 앱으로 열어서 사용합니다

브라우저에서 안드로이드폰으로 링크전송

브라우저와 안드로이드폰에 Pushbullet을 깔고, 각각 로그인합니다
브라우저와 안드로이드폰에 Pushbullet을 깔고, 각각 로그인합니다
브라우저에서 보고있는 현재 링크를 Pushbullet 버튼을 눌러서 전송합니다
안드로이드폰에서 링크가 푸시알림으로 옵니다

기타 기능들

이 글에서는 기기간 전송기능만 썼는데요, 아래의 기능들도 있습니다.

저는 전송기능 하나만으로도 정말 훌륭한 앱이라고 생각하는데, 기능이 많이 추가되고 있네요. 처음엔 글이랑 링크만 보낼 수 있었는데, 파일도 보낼수 있게 되서 많이 발전했죠.
지금도 발전가능성이 많이 보이는 앱이고, 사용하기에도 간편해서 주변사람들에게도 많이 추천하는 앱입니다 :)

덧) 파일전송 제한은 25메가 이니, 큰 파일은 유선이나 대용량첨부로 해야합니다ㅠ


          By: Baz   
quixote - Symantec. Yes, well, um, moving on... Virus writers use everything in their power to make it as hard as possible to remove viruses. Not least because many viruses try to kill off competitors so they have the infection all to themselves. This meddling is one of the things that causes problems on infected PCs. Ubuntu - good idea. My old ISP had a monitor system in place and I could only send e-mails at a 'manual' rate. This was a minor nuisance since once a month I had a mail-out of an opted-in newsletter which needed to be sent individually. If I'd been sending as a group and letting the ISP do the distribution, I could have sent thousands out in the one e-mail. Unfortunately, that ISP had a terrible attitude of 'now we've hooked you with a reasonable cost we can up the charges' and even tried to block my new ISP from taking over my BT line to stop me leaving. I stopped that easily enough. One of the problems is good small ISPs being taken over by larger ISPs who have a terrible attitude. My first was, my second was, my third was bad on its own and now my forth has been.
          Firmware updates for MacBooks that don’t dual boot OSX   
My Macbook currently hosts Ubuntu, and there is no copy of OSX on it. I keep a USB stick with OSX installed on it, and today I got to test if I could use this to install a firmware update. Short version: It just works. Slightly longer version: I just rebooted a few times, holding […]
          [ubuntu] Strange Boot/Lockup issues   
Hello All: This just started a few days ago. Upon boot up, I can open programs such as Thunderbird, Firefox, LibreOffice 5, but cannot navigate anywhere. Thunderbird will auto download emails, but I cannot open anything. I usually have to reboot 2 or 3 times before I can get anything to...
          Browsers extremely slow and intermittently disconnects 14.04   
Hello, I am not very familiar with Ubuntu other than performing a few installations. I have been having a very odd issue with this most recent install of version 14.04. Anytime I try to use any internet browser it will work for a few seconds to a minute then it will just stop working (like the...
          Unmet dependencies   
Hi, I'm relatively new to Ubuntu admin. It is one of the downsides of Ubuntu that you need to be a techie to do almost anything outside the GUI such as installing printers. Until that is fixed Ubuntu, and Linuxm isn't going to make it mainstream. Can't understand why doing a basic function like...
          [ubuntu] Issue whenever restarting laptop for installation   
Hi, I've been trying to install Ubuntu 17.04 on my new laptop. I created a 150 GB partition on my hard disk and then installed Ubuntu through USB. Everything went fine until I was asked to restart. Upon restarting the screen was black and it read Code: --------- platformMSFT0101:00: failed...
          [ubuntu] Ubuntu drops USB mount midway through file transfer   
Strange problem when I use "sneaker net" to transfer files between two Ubuntu machines at home. I move files onto a 16GB USB stick on one machine, verify the files are there (e.g., open directly from the stick), unmount the stick, plug it into the second machine, start copying files, then suddenly...
          WNDA3100v2 – N600 don't wanna connect to some WiFi networks   
Hi, i just installed the Ubuntu 14.04 (dualboot with w8) on my PC and i have a problem with my WiFi adapter, I can't connect to my TP-LINK WR740N, i somehow got the WNDA3100v2 work with Windows Wireless Drivers and a broadcom driver found in a post. I can connect to my Android and iPhone hotspot...
          [ubuntu] Laptop runs hot Asus UX501VW-D871T nvidia GeForce GTX 960M   
Laptop runs hot. It sometimes reaches 92 degrees... usually hovers around 68-72 degrees. I tried multiple things but couldnt bring the temperature down. The only option is... I shut down and wait for the laptop to cool down. cpu fan 3300rpm cpu fan min 800rpm cpu fan max 4200rpm Please,...
          [UbuntuGnome] Create hot key to disable trackpad   
I recently got a new laptop (XPS Dev Ed.) and need a way to quickly turn the trackpad on/off. On my previous laptop I had an fkey that was originally designed to do this but the XPS doesn't have this key. Anyone know how I might be able to create a hot key to turn the trackpad on/off? Thanks!
          [ubuntu] Ransomware   
Hey guys with Ubuntu or any other Linux distro this ransomware does it affect Linux systems and what to do to prevent it sorry for the dumb question just trying to cover my tracks even with Linux
          whatsapp al pc   
Bon dia, Vull rebre els missatges de whatsapp al meu PC, sense que calgui tindre cap smartphone. He trobat una aplicació per al PC per fer-ho, però diu que només funciona per al windows i el macintosh: https://www.whatsapp.com/download/?l=ca N'hi ha cap per a l'ubuntu?
          [other] Facing issue while installing MySQL   
Hi All, I am new to MySQL. I installed MySQL 5.7 on Ubuntu 16.04.2 LTS but forgot the root password set. I tried several ways to reconfigure, but unlucky. So decided to remove and install it again. For removing MySQL packages, I executed the following: sudo apt-get remove mysql* sudo...
          [ubuntu] Looking for easy to use security camera software for security camera system on ubuntu   
guys, I am looking to used my unbuntu 14.x server as a security camera server. I hear about zoneminder and that it is hard to install . Is there any easy to used software for unbuntu that works with wireless wifi IP cameras and webcams
          [all variants] JACK audio slowed down and pitched down   
Total newbie to JACK audio. I'm looking to get JACK audio inside a Xubuntu 16.04 VirtualBox VM (see this thread (https://ubuntuforums.org/showthread.php?t=2361064) for some context). I can hear audio from JACK, but it sounds like it's pitched waaaaay down and slowed waaaaay down. In qjackctl,...
          Ήξερες ότι.............   

Οι πίθηκοι θέλουν ισότητα στις απολαβές. Όταν δίνεται διαφορετική αποζημίωση για την ίδια δουλειά, οι "ριγμένοι" πίθηκοι αναστατώνονται.

ΠΗΓΗ
http://agones.gr/news/1007595/25_alitheies_pou_den_ixeres_gia_ta_zoa_kai_tha_ftiaxoun_ti_mera_sou

          Ubuntu 13.10   
Ubuntu 13.10


Встречайте новую версию Ubuntu 13.10 .
Основные изменения были ориентированы на поиск в сети интернет и интеграцию с социальными сетями. Вместе с новой версией идёт обновлённая версия пакета LibreOffice 4.0 , которая включила в себя множество новых шаблонов и более красивый UI (пользовательский интерфейс). Советовал бы подождать месяца 2 перед тем как ставить ОС на ваши рабочие компьютеры :-) .
          Ubuntu Server Natty Narwhal 11.04 [i386+amd64] (2xCD)   
Ubuntu Server Natty Narwhal 11.04 [i386+amd64] (2xCD)


Ubuntu Server разработана специально для серверов. Занимает в среднем столько же места, сколько и Ubuntu Desktop (1 CD).
          公网固定ip的主机 Cygwin ssh 远程登录局域网内的EPC283-CL失败   
现象:公网固定ip的主机通过Cygwin ssh 能成功远程登录ubuntu虚拟机, 却无法远程登录 EPC283-CL。 EPC283-CL与虚拟机处于同一个局域网内 目的:使用Cygwin ssh实现远程登录ECP283-CL EPC283-CL 抛错如下: [root@M28x ~]# debug1: client_input_channel_open: ctype ...
          Ubuntu จะเปลี่ยนตัวจัดการแสดงผลไปใช้ GDM ของ GNOME แทน LightDM เดิม   

Ubuntu เตรียมเปลี่ยนจาก LightDM มาใช้ GDM (GNOME Display Manager) เป็นตัวจัดการแสดงผล (display manager) หลักบน Ubuntu เวอร์ชัน 17.10 และ 18.04 LTS พร้อมกับการเปลี่ยนจาก Unity มาเป็น GNOME

ในตอนแรกนั้น ทีมงานพยายามจะทำให้ GNOME Shell เป็น LightDM Greeter แต่เนื่องจากการแยกโค้ดนั้นไม่ง่าย ซึ่งจากการชั่งน้ำหนักความเสี่ยงและปริมาณงานที่ต้องทำในการแก้ไข GNOME ให้รองรับ LightDM ทีมงานจึงตัดสินใจเลือกใช้ GDM แทน

ปัจจุบัน ระบบจัดการแสดงผลนี้มีหน้าที่สำคัญอย่างหนึ่งคือการแสดงผลในหน้าล็อกอิน ดังนั้นหากเปลี่ยน LightDM เป็น GDM จริง เราน่าจะเห็นหน้าจอล็อกอินและหน้าจอล็อกของ Ubuntu เปลี่ยนไปด้วย

ส่วน LightDM ในอนาคตจะเหลือเพียงการสนับสนุนเฉพาะการแก้บั๊กจากทีม Ubuntu Desktop เท่านั้น และจะสนับสนุนเฉพาะรุ่น Ubuntu ที่อยู่ในระยะการสนับสนุนเท่านั้น

ที่มา - OMG! Ubuntu!


          Skype for Linux จะหยุดทำงาน 1 ก.ค. 2017 ผู้ใช้ต้องอัพเกรดเป็น Skype Beta   

Skype ประกาศว่าไคลเอนต์ตัวเก่าบนลินุกซ์ (Skype for Linux 4.3) จะหยุดทำงานในวันที่ 1 กรกฎาคม 2017 ผู้ใช้ลินุกซ์จำเป็นต้องอัพเกรดไปใช้ไคลเอนต์ตัวใหม่แทน

ไคลเอนต์ตัวเก่าเขียนด้วย Qt และทำงานแบบเนทีฟ มันถูกอัพเดตครั้งสุดท้ายในปี 2014 ส่วนไคลเอนต์ตัวใหม่นับเลขเวอร์ชันเป็น 5.x เขียนด้วย Electron และยังมีสถานะเป็น Skype Beta (เวอร์ชันล่าสุดคือ 5.2) อย่างไรก็ตาม Skype Beta อาจยังขาดฟีเจอร์สำคัญหลายตัวที่มีในแพลตฟอร์มอื่น

Skype Beta สามารถดาวน์โหลดได้จากหน้าเว็บของ Skype มีให้เลือกทั้งเป็น .deb และ .rpm

ที่มา - OMG Ubuntu

No Description

Topics: 

          ช่องโหว่ใน sudo ทำให้ผู้ใช้ทั่วไปเขียนไฟล์ด้วยสิทธิ์ root ได้, ยึดเครื่องได้ในที่สุด   

ทีมความปลอดภัย Qualys รายงานถึงช่องโหว่ของโปรแกรม sudo ที่ลินุกซ์ใช้เพื่อเปิดสิทธิ์ให้ผู้ใช้ทั่วไปเข้าถึงสิทธิ์ root บางส่วน กลับสามารถเขียนทับไฟล์ใดๆ ในเครื่องได้ จนนำไปสู่การยึดสิทธิ์ root ไปทั้งหมด

ความผิดพลาดนี้เกิดจากการอ่านข้อมูลจาก /proc/[pid]/stat ที่ไม่ได้คำนึงว่าชื่อไฟล์อาจจะมีช่องว่างอยู่ ทำให้การอ่านค่าผิดพลาด เมื่อแฮกเกอร์สร้างสคริปต์ที่เรียก sudo โดยตัวสริปต์เป็นชื่อไฟล์ที่มีช่องว่างเพื่อเจาะระบบ

ตอนนี้ลินุกซ์หลักๆ ล้วนออกแพตช์หมดแล้ว Ubuntu นั้นได้รับผลกระทบตั้งแต่เวอร์ชั่น 14.04 ขึ้นไป ส่วน RHEL ได้รับผลกระทบในเวอร์ชั่น 5, 6, และ 7

ที่มา - OpenWall, Red Hat Security Advisory, Ubuntu

Topics: 

          Windows 10 S ไม่มีคอมมานด์ไลน์, ไม่มี PowerShell, ไม่สามารถรันลินุกซ์ได้   

จากข่าว Ubuntu, SUSE, Fedora ลง Windows Store ไมโครซอฟท์ออกมาอธิบายข้อมูลเพิ่มเติมว่าความสามารถนี้ใช้ไม่ได้กับ Windows 10 S แม้ว่าจะเป็น "แอพ" จาก Store ก็ตาม

เหตุผลของไมโครซอฟท์คือไม่ใช่แอพทุกตัวจาก Store ที่จะใช้งานได้กับ Windows 10 S ตัวอย่างแอพเหล่านี้คือคอมมานด์ไลน์ เชลล์ คอนโซล ซึ่งรวมถึง cmd.exe, PowerShell และ WSL ด้วย

ไมโครซอฟท์อธิบายว่า Windows 10 S ออกแบบมาสำหรับผู้ใช้ที่ไม่ใช่กลุ่มเทคนิค (non-technical users) เช่น ครู นักเรียน ศิลปิน ฯลฯ ที่ไม่ต้องการปรับแต่งแก้ไขคอมพิวเตอร์ของตัวเองมากนัก แต่เน้นคอมพิวเตอร์ปลอดภัย เสถียร ทำงานได้เร็ว

ส่วนกลุ่มนักพัฒนาแอพ แอดมิน หรือผู้เชี่ยวชาญด้านไอที ไมโครซอฟท์แนะนำให้ใช้ Windows 10 ตัวปกติแทน

ที่มา - Microsoft

No Description


          Ubuntu, SUSE, Fedora ลง Windows Store ติดตั้งแล้วใช้งานบน Windows 10 ได้เลย   

นอกจากข่าวช็อควงการอย่าง iTunes ลง Windows Store แล้ว ไมโครซอฟท์ยังประกาศข่าวว่าลินุกซ์ 3 ค่ายดังคือ Ubuntu, SUSE, Fedora ก็ลง Windows Store ด้วย

Windows 10 มีฟีเจอร์ Linux Subsystem ที่ใช้ Ubuntu อยู่แล้ว เพียงแต่ผู้ใช้ต้องติดตั้งโค้ดส่วนนี้เพิ่มเองที่มีขั้นตอนพอสมควร การเพิ่มตัวเลือกให้กดง่ายๆ บน Windows Store จึงช่วยให้ผู้ใช้สะดวกมากขึ้น

ตัว Linux Subsystem สามารถเปลี่ยนจาก Ubuntu เป็นดิสโทรอื่นได้ (เช่น SUSE) ทำให้ไมโครซอฟท์ชักชวน SUSE และ Fedora มาเป็นตัวเลือกอีกสองตัวบน Windows Store ให้ผู้ใช้เลือกดิสโทรที่ต้องการได้เลย

หมายเหตุ: ฟีเจอร์นี้คือเฉพาะตัวแกนของลินุกซ์ที่รันบน Windows Subsystem for Linux นะครับ ไม่ใช่ดิสโทรลินุกซ์ตัวเต็ม

No Description

No Description


          Mark Shuttleworth ยืนยัน Ubuntu ไม่ทิ้งเดสก์ท็อป แต่จะโฟกัสที่ Cloud/IoT มากกว่า   

หลัง Ubuntu พับแผนการด้านเดสก์ท็อป-มือถือครั้งใหญ่ ก็เกิดคำถามตามมามากมายว่าอนาคตของ Ubuntu จะเป็นอย่างไรต่อไป ล่าสุด Mark Shuttleworth ซีอีโอจึงให้สัมภาษณ์ในประเด็นนี้

Shuttleworth เล่าว่าโลกคอมพิวเตอร์ในปัจจุบันแบ่งออกเป็น 3 ขาคือ personal computing, data center/cloud และ edge/IoT ซึ่งตอนนี้ Ubuntu กลายเป็นระบบปฏิบัติการมาตรฐานสำหรับโลก data center/cloud ไปแล้ว และในโลกของ edge/IoT ก็น่าจะมีบทบาทไม่น้อย

เขายอมรับว่าวางแผนผิดพลาดไปในตลาด personal computing ที่พยายามควบรวมพีซี-มือถือ-แท็บเล็ต แต่ก็ยืนยันว่าเดสก์ท็อปยังเป็นตลาดสำคัญที่ Ubuntu จะไม่ทิ้ง อย่างไรก็ตาม ในแง่ธุรกิจแล้ว Ubuntu จะโฟกัสไปที่ตลาด data center/cloud ที่เข้มแข็ง และตลาด edge/IoT ที่กำลังอยู่ในช่วงเริ่มต้น

ที่มา - OMG Ubuntu


          Ubuntu 17.10 วนกลับมาที่ตัว A ได้ชื่อ Artful Aardvark    

Ubuntu 17.04 ใช้โค้ดเนม Zesty Zapus ซึ่งถือว่าเดินทางมาถึงตัว Z แล้ว สิ่งที่แฟนๆ สงสัยคือ Ubuntu 17.10 รุ่นหน้าจะใช้โค้ดเนมอะไร จะวนกลับมาเป็นตัว A หรือไม่

Mark Shuttleworth ผู้นำโครงการ Ubuntu ยังไม่ประกาศข้อมูลนี้ แต่รอบนี้ข้อมูลจากระบบติดตามบั๊กของ Ubuntu โผล่มาให้เห็นแล้วว่าเป็นชื่อ Artful Aardvark

ตัวอาร์ดวาร์ก เป็นสัตว์เลี้ยงลูกด้วยนมชนิดหนึ่งที่อาศัยอยู่ในทวีปแอฟริกา ที่มาของชื่อมีความหมายว่า "หมูดิน" ในภาษาแอฟริคานส์

ที่มา - Ubuntu Launchpad, OMG Ubuntu, ภาพจาก Wikipedia

No Description


          Ubuntu 17.10 จะย้ายไปใช้ระบบแสดงผล Wayland แทน Mir ของเดิม   

จากกรณี Ubuntu เลิกใช้ Unity ก็มีคำถามตามมาถึงระบบแสดงผล (display server) ที่ถูกพัฒนาขึ้นมาใช้ร่วมกันคือ Mir

ก่อนหน้านี้ Mark Shuttleworth เคยให้ข้อมูลมารอบหนึ่งแล้วว่า Unity จะหยุดพัฒนาอย่างถาวร ส่วน Mir จะยังอยู่ต่อแต่ย้ายไปจับตลาด IoT แทนเดสก์ท็อป

ล่าสุด Will Cooke ผู้จัดการทีมเดสก์ท็อปของ Ubuntu ยืนยันแล้วว่า Ubuntu จะย้ายไปใช้ Wayland ระบบแสดงผลอีกตัวแทน

ประกาศนี้ไม่เหนือความคาดหมายนัก เพราะ Wayland ถูกใช้ในดิสโทรลินุกซ์อื่นๆ เช่น Fedora อยู่แล้ว และการเปลี่ยนแปลงน่าจะเสร็จทันใน Ubuntu 17.10 รุ่นหน้า

ที่มา - OMG Ubuntu


          Ubuntu GNOME ออกรุ่น 17.04, ประกาศทิศทาง รวมทีมเข้ากับ Ubuntu หลัก   

โครงการ Ubuntu GNOME ประกาศออกรุ่น 17.04 โดยใช้ GNOME 3.24 รุ่นล่าสุด พร้อมประกาศทิศทางในอนาคตว่าจะไปรวมกับ Ubuntu ตัวหลักแล้ว

ทีมงาน Ubuntu GNOME และ Ubuntu Desktop (Unity) จะถูกรวมเข้าเป็นทีมเดียวกัน ตอนนี้ทางทีม Ubuntu GNOME กำลังวางแผนการทำงานกับทีมฝั่ง Canonical อยู่ว่าจะทำอะไรในช่วงไหน และจะประกาศข่าวต่อไป

ผู้ที่ใช้ Ubuntu 16.04 LTS หรือ Ubuntu GNOME 16.04 LTS จะได้อัพเกรดเป็น Ubuntu 18.04 LTS ทีเดียวเลยในปีหน้า ถือเป็นจุดสิ้นสุดของการแยกดิสโทรสองสายนั่นเอง

ที่มา - Ubuntu GNOME


          Error Msg Hamsphere 3.0 (1 reply)   
I have been trying to login to Hamsphere 3.0 and it comes up with the following:

Error Msg
Module: Rx Audio driver (codec)
Fault: Cannot initiate the audio driver

I have tried /play1-4 but it still does not come up . I have not had this trouble before .
Am using Linux Ubuntu Mate 16.04.
I am also running Hamsphere 4.0 but I prefer Hamsphere 3.0 at this time because there are more people using it.

Any help to get rid of this error msg would be appreciated .

Thank you.

Mal Ellis
3D2MR
          Riprodurre file H.265/HEVC su Ubuntu con VLC   
Erano tipo anni che VLC non mi deludeva e invece ieri sera mentre mi accingevo a guardare la solita puntata di The Walking Dead ecco che il programma mi si presenta cosi come un pulcino bagnato: Povero non si ritrovava il codec giusto per la decodifica dei file hevc (oddio erano decenni che non installavo codec a […]
          Gestire Corsair Strafe RGB su Linux Ubuntu   
Il mio primo autoregalo del 2016 è stata una bella tastiera meccanica Corsair Strafe RGB. La masturbazione con i led RGB e gli switch cherry mx brown è arrivata proprio al top, senza contare che quando scrivo (tipo ora) è una gioia per l’udito sentirla cantare. Finiti i primi muniti erettivi intensi mi sono chiesto: […]
          Cambiare DNS su Linux Ubuntu   
Taaanto tempo fa, in una galassia lontana lontana (Cit) scrivevo su questo angolo del web tutto mio. Oggi invece scrivo in altri posti perché sostanzialmente me pagano per farlo. Ora torno a scrivere due righe su questo blog dopo un anno perché ultimamente mi sono ritrovo a non poter più visualizzare uno dei mie siti […]
          Ubuntu 11.04 will be Natty Narwhal   

Mark Shuttleworth just announced the name of the next version of Ubuntu. After Lucid Lynx (10.04) and Maverick Meerkat (10.10) comes Natty Narwhal (11.04). This version of the OS should arrive April 28, […]

The post Ubuntu 11.04 will be Natty Narwhal appeared first on Geek.com.


          Installing djb-dns on a Linux machine.   

Down below is a script you can use to install djb-dns on a Linux system (like Ubuntu).

Specifically, it will install dnscache (a local caching nameserver) which resolves any domain name into an IP address. This is much like Google's public 8.8.8.8 DNS server.

Background on DNS lookups

To be clear: dnscache is not an "authoritative" dns server A dns cache is a simply a middle-man that executes global dns lookups on behalf of an incoming query, and caches the result for subsequent queries. See this clarification.

When a program does a dns lookup (turning a domain name into an IP, or vice versa) it uses a dns client library (e.g. calling the UNIX function gethostbyname()) to connect to a ("recursive") domain name server. That server (typically hosted by your ISP) does all the dirty work of first talking to the root-name-servers and going down the tree of DNS lookups until the full domain name is completely resolved.

The file /etc/resolv.conf contains the IP address(es) of the domain name server(s) your system is using. It is a small file that typically looks something like:

nameserver a.b.c.d
nameserver e.f.g.h

Why do I need to run my own dns cache?

The dns cache servers that your ISP is hosting typically aren't very good. Those servers are overloaded, not well maintained, etc... If you are doing a high volume of dns-lookups they won't keep up. For instsance, you are running a web crawler, or doing reverse-lookups on all the IP addresses that visit your site. Your ISP's servers will introduce latency and flakiness. I've personally dealt with 3 ISPs whose servers started returning errors because my volume was too high.

I've even run my own dns cache on my home Linux desktop because my home ISP's was so bad. (Nowadays I just use 8.8.8.8 for my home networks.)

What's so special about djb-dns?

It's rock-solid. It's written by this crazy-smart guy who knows his shit, and even has an unclaimed $1000 prize to find a security bug.

I've used it multiple times and haven't had any problems. The only downside is it's a pain-in-the-ass to install. Thankfully, I've gone through the headache for you.

The Install Script

# Must be run as root
# Also see http://hydra.geht.net/tino/howto/linux/djbdns/

#Create a /package directory:
mkdir -p /package
chmod 1755 /package

cd /package
wget http://cr.yp.to/daemontools/daemontools-0.76.tar.gz
gunzip daemontools-0.76.tar.gz
tar -xpf daemontools-0.76.tar
rm daemontools-0.76.tar
cd admin/daemontools-0.76
# Apply dumb patch to make things compile
cd src; echo gcc -O2 -include /usr/include/errno.h > conf-cc; cd ..
./package/install

cd /package
wget http://cr.yp.to/ucspi-tcp/ucspi-tcp-0.88.tar.gz
rm -rf ucspi-tcp-0.88
tar xfz ucspi-tcp-0.88.tar.gz
cd ucspi-tcp-0.88
# Apply dumb patch to make things compile
echo gcc -O2 -include /usr/include/errno.h > conf-cc
make
make setup check

cd /package
wget http://cr.yp.to/djbdns/djbdns-1.05.tar.gz
gunzip djbdns-1.05.tar.gz
tar -xf djbdns-1.05.tar
cd djbdns-1.05
# Apply dumb patch to make things compile
echo gcc -O2 -include /usr/include/errno.h > conf-cc
# Allow more simultaneous dns requests
sed -i -e "s/MAXUDP 200/MAXUDP 600/g" dnscache.c
make
make setup check

########## Install Users and Service directories ###########
groupadd dnscache
useradd -g dnscache dnscache
useradd -g dnscache dnslog
/usr/local/bin/dnscache-conf dnscache dnslog /var/dnscache
ln -s /var/dnscache /service

# Fix the nameservers to point to current ICANN structure 
# This assumes you have dig installed 
# Patch in the current list of root servers  
for a in a b c d e f g h i j k l m
do
  dig +short $a.root-servers.net.
done > /var/dnscache/root/servers/\@

# Increase the cache to 100MB
echo 100000000 > /service/dnscache/env/CACHESIZE
echo 104857600 > /service/dnscache/env/DATALIMIT

# Change multilog to keep more logs
echo "#!/bin/sh" > /service/dnscache/log/run
echo "exec setuidgid dnslog multilog t s10000000 ./main" >> /service/dnscache/log/run
Now all the tools and binaries are installed. To verify that the tools were installed you can do:
dnsip www.google.com
Now you just have to kick-off the dnscache server and update /etc/resolv.conf. You will want to run the following script at system startup (if you don't, the file /etc/resolv.conf might get over-written by your system):
# Must be run as root
rm -rf /etc/resolv.conf.prev
mv /etc/resolv.conf /etc/resolv.conf.prev
echo "nameserver 127.0.0.1" > /etc/resolv.conf

## init q  # (is this needed?)
/command/svscanboot &
sleep 5
svc -u /service/dnscache   # FYI: -t does a reboot
svstat /service/dnscache
svc -t /service/dnscache/log
Enjoy!
          Devices: Ubuntu Core, Yocto, Microsoft, and Tizen   
  • Ubuntu Core opens IoT possibilities for Raspberry Pi (CM3)

    Ubuntu Core running on the Raspberry Pi Compute Module 3, which is a micro-version of the Raspberry Pi 3 that slots into a standard DDR2 SODIMM connector, means developers have a route to the production and can upgrade functionality through the addition of snaps- the universal Linux application packaging format.

    Device manufacturers can also develop their own app stores, all while benefiting from the additional security of Ubuntu Core.

  • Rugged marine computer runs Linux on Skylake-U

    Avalue’s “EMS-SKLU-Marine” is an IEC EN60945 certified computer with 6th Gen Core CPUs, -20 to 60°C support, plus 2x GbE, 4x USB 3.0, M2, and mini-PCIe.

    The EMS-SKLU-Marine is designed for maritime applications such as control room or engine room, integrated bridge systems, propulsion control or safety systems, and boat entertainment systems. Avalue touts the 240 x 151 x 75mm box computer for being smaller than typical boat computers while complying with IEC EN60945 ruggedization standards.

  • Module runs Yocto Linux on 16-core 2GHz Atom C3000 SoC

    DFI’s rugged, Linux-ready “DV970” COM Express Basic Type 7 module debuts the server-class, 16-core Atom C3000, and supports 4x 10GbE-KR and 16x PCIe 3.0.

    DFI promotes the DV970 as the first COM Express Basic Type 7 module based on the Intel Atom C3000 “Denverton” SoC, but it’s the first product of any kind that we’ve seen that uses the SoC. Intel quietly announced the server class, 16-core Atom C3000 in late February, with a target of low-end storage servers, NAS appliances, and autonomous vehicles, but it has yet to publicly document the SoC. The C3000 follows other server-oriented Atom spin-offs such as the flawed, up to 8-core Atom C2000 “Rangeley” and earlier Atom D400 and D500 SoCs.

  • Why Microsoft's Snapdragon Windows 10 Cellular PC Is Walking Dead

    Intel's veiled threat to file patent infringement suit against any company emulating x86 Win32 software on ARM-based computers has probably slayed Microsoft's Cellular PC dream.

  •  

  • New BMW X3 brings latest digital and driver assistance tech, connects to your Gear S2 / S3

          Apple Wireless Keyboard + Ubuntu Intrepid   
I had some trouble pairing at first, but found this to work:
  1. Turn keyboard off
  2. Open up the new bluetooth device wizard
  3. Turn the on
  4. Select the keyboard
  5. Click Next
  6. While the wizard is waiting, enter 0000 and Enter on the keyboard
  7. Enter 0000 and Enter a second time.
The main issue seems to be that the OS doesn't prompt you at the correct time to enter the pairing password.

          Wormhole is a Fast, Secure Way to Send Files to Other Users Through the CLI   

Next time you a file you want to send to a friend but don’t fancy the hassle of using Dropbox, try Wormhole. It’s a fast, free and secure way to send files to Linux and macOS users. For such a small python app it is truly cosmic: you just open a wormhole on your desktop […]

This post, Wormhole is a Fast, Secure Way to Send Files to Other Users Through the CLI, was written by Joey Sneddon and first appeared on OMG! Ubuntu!.


          Bitfield   
I've been helping jk with the simple but wonderful bitfield (me helping mostly equated to telling him what features I wanted, followed by him telling me he'd already implemented them and then laughing at me).
We now have bash completion, vim modes and Debian/Ubuntu packages which can be grabbed from here:
deb http://neuling.org/devel/bitfield/ ./
We are looking to improved the register database, so if you have any register definitions you'd like added, email jk or even better, grab the Mecurial tree and submit it as a patch.

          Monday, December 05, 2005   
 
I bought myself an IOGear GME225B Bluetooth mouse. Works nicely under Ubuntu Linux (Breezy) with a simple
sudo hidd --connect XX:XX:XX:XX:XX:XX
command while holding down the connect button on the bottom. I didn't even have to restart X. It recharges over USB and can still be used while being charged. My laptop has Bluetooth built in, so it's a very neat solution requiring no cables or dongles (except when charging).
 

          Dualboot encrypted Windows and Ubuntu   
I prefer to use Linux software for a lot of reasons.  But sometimes I need to use Windows, so I have both of them installed on my laptop.  It took me much time to find out how to encrypt these two and still dualboot them. Well, I found out how …
          SpeedTouch 120g in Ubuntu   
Do you have the SpeedTouch 120g at home or the office?  That’s a pretty nice wifi adapter which supports the 802.11g standard and even WPA2.  But why doesn’t it work in the latest version of Ubuntu? Read my article, it can probably help you to get this adapter working in …
          Techblog is alive!   
Finally it’s there: my own techblog site!  I’ve worked hard to start with some nice content which I finished today. First article is about how to get the SpeedTouch 120g wifi adapter working in Ubuntu 12.04.  Second article is about dualbooting Windows and Ubuntu while both of them are encrypted.  …
          LINE BOTでリッチメニューと新メッセージタイプのカルーセルタイプを使ってみる   

LINE DEVELOPER DAYで知った「リッチメニュー」と新しいメッセージタイプ「カルーセル」を試したかったので、LINE BOTを作成してみた。

いわゆるチャットBOTで最初に作るものといえば、出退勤BOTか近隣レストラン検索かと思うので、今回は近隣レストラン検索を行うことにしました。

(BOT自体は未公開です)

利用例

f:id:rubellum:20161107121034p:plain

図1:池袋でラーメンを探すの巻

 

最初に位置情報を送ってもらい、次に検索キーワード(例:ラーメン)と送ると、ぐるなびの検索結果が返ってきます(お店の精度は……泣)

「ラーメン」とメッセージを送ったあとの、「画像+タイトル+説明」の部分がカルーセルタイプです。

 

カルーセルタイプ以外にも、Yes or Noの「Confirm Type」や「Button Type」があります。

LINE、chatbotの開発・普及に向けて新たな展開を発表 | LINE Business Center

 

カルーセルタイプはスワイプで横に移動でき、今回は3件分の結果を横並びで見ることができます。

 

リッチメニュー

最下部の「検索メニュー」というところが「リッチメニュー」と呼ばれるもので、ここをタップするとシュッとメニューが表示されます。メニュー名は管理画面から設定できます。

 

f:id:rubellum:20161107120021p:plain

図2:アイコン(イラスト)のせいでリッチに見えないリッチメニュー

 

ラーメン検索、とんかつ検索となっているところがボタンになっており、このボタンを押すと「ラーメン」や「とんかつ」といった特定のメッセージを送信することができます。

そのため、ユーザーにいちいちメッセージを入力してもらわなくても、ボタンひとつで定型のメッセージを送ってもらうことができます。

 

↓のようにきちんとグリッドで表示するとリッチになります。

image.itmedia.co.jp

 

リッチメニュー自体は、LINE@の管理画面から設定することができます(左メニューのリッチコンテンツ作成)。

今回のケースでは、左半分・右半分で2つのボタンが設定されていますが、さまざまな形式で表示方法を設定することができます。

下記は画像を分割するタイプの種類です。他にも「アイコン+テキスト」による表示形式も選択できます。

 

f:id:rubellum:20161107145806p:plain

 

リッチメニューはUIの第一印象がとてもよかったので、LINEやチャットBOTという用途に限らず、アプリに組み込んで何かするサードパーティ向けのUIとして広く使えるのではないかと思いました。

(その検証のため、わざわざこのBOTを作ったのでした)

開発環境メモ

サーバー:Vultr ( ← なぜか東京リージョンを持っている海外のVPS )

OS:Ubuntu 14.04

言語:PHP 7(Slim Framework 3)

レストラン検索API:ぐるなび http://api.gnavi.co.jp/api/

 ソース:https://github.com/rubellum/hirudoki ごちゃごちゃしてます。

 

LAMP環境です。

SSLが必要なので、新たにドメインを取りつつ、Let's Encryptで証明書を取得。

今回初めてLet's Encryptを使用してみましたが、certbotの使い方がいまいちわからず、正直なところBOT作成よりも手こずりました。

certbotのコマンド実行時、Apacheを立ち上げていたよくなかったらしく、停止して上記の手順を踏んだらうまく行きました。

certbotが証明書生成のための確認用サーバーとやりとりするときにapacheとポートが被って自身のhttpサーバーを立ち上げられないとかそんなところじゃないかと予想してますが、正直なところ原因不明です。

参考)

How To Secure Apache with Let's Encrypt on Ubuntu 14.04 | DigitalOcean

最後に

LINE BOTを作成してみました。

世間はチャットBOTが流行る流行ると言っていますが、意外にみんなアイデアが思いつかなくて悩ましい感じがしてますね。

LINE DEVELOPER DAYでもらったBeaconも試さないとナ…。

 

 


          「半角/全角」よりもMacの「かな」「英数」の方が好き   

Macには「半角/全角」キーがない。
代わりに、スペースの左側に英字入力(半角)に切り替える「英数」キー、右側に日本語入力(全角)に切り替えるを「かな」キーが用意されている。
「半角/全角」は日本語入力と英字入力を"トグル"するが、Macの「かな」および「英数」キーは"トグル"しない(
それぞれのモードに移行するだけ)。


僕は「半角/全角」より、この「かな」「英数」を使う方が好きだ。
「半角/全角」は位置が少しだけ遠くて、打ちにくいと感じてるからだ。
スペースの両隣にある「英数」「かな」は親指で押せるため、こちらの方がストレスなく打つことができる。
僕はすべての環境において、このキー配置を使いたいと考えている。


僕の身近にあるキーボードはMacBook ProRealforce 91UDK-Gの2つ。
RealforceUbuntuマシン(友人には散々Arch Linuxを勧めておきながら)で使っている。
したがって、UbuntuRealforceという環境で「かな」「英数」のキー配置を実現する必要がある。
幸いなことに設定は簡単だった。


まず、「かな」「英数」として使いたいキーのkeycodeを調べる。
これはxevを使うと簡単に調べられる。xevの実行後、調べたいキーを入力すればよい。

$ xev

僕の場合、keycode "102"を「英数」、keycode "100" を「かな」に割り当てればよさそうだった。


次に、ホームディレクトリに「.Xmodmap」を作成する。
「.Xmodmap」は起動時に読み込まれる設定ファイルだ。
keycodeと「Return」や「Escape」といったイベント(?)を結びつけることができる。

「英数」に"Hankaku"、「かな」に"Zenkaku"を割り当てた("Zenkaku", "Hankaku"は適当に決めた)。

! 日本語入力(全角)
keycode 100 = Zenkaku
! 英数入力(半角)
keycode 102 = Hankaku


最後に、IMEのON/OFFを"Zenkaku"および"Hankaku"に対応させる。
これはibusの設定画面から設定可能だ。設定画面はibus-setupで表示できる。

$ ibus-setup

一般タブの「Enable:」「Disable:」がそれぞれ「かな」「英数」に対応しているので、"Zenkaku", "Hankaku"を対応させる。


これで快適なUbuntu生活を送れそうだ。


          Hiring Now! Full Stack Developer with DevOps interests   
We're looking for intermediate to senior full stack developer with DevOps interests. This is a brand new role that we have created to help facilitate significant growth across the Generator business, where you'll form a critical part of our innovation roadmap.

What's the job?Your primary role as a full stack developer is to work alongside our CIO, senior management team, external software developers, and internal IT operations team across three distinct areas.

Extending the capabilities of our private cloud-based ERP application and customer portal as you integrate a number of third party applications and APIs into our the GenHub ecosystem.

Building and/or augmenting various independent internal systems with a focus on service delivery automation, (some from the ground up), and connecting them into a wider integrated ecosystem.

Developing beautiful reporting dashboards for our senior executive team that will pull, correlate, and analyse big data from our various internal and external systems. Presenting results allow our team to manage and forecast all our business operations across multiple sites.

Tool's you'll be working with on a daily basis:

Node.JS

Python / Django

Redis

MySQL

Ubuntu / Debian Linux

and anything else you want to bring to the table.In a nutshell we're looking for a cat riding a unicorn with a golden desert eagle full stack developer with a diverse skill set, to help us build and improve the overall automation and integration of all our systems and services.

You can learn more about this exciting opportunity, company (incl video) and what it offers at: https://recruit.chillifactor.co.nz/jobs/view?id=1412/


          April 23rd is Download Day   
I seem to be downloading a couple of Gigs of stuff this morning. First off is Ubuntu 9.04, which was released today. I am grabbing both the “desktop” and “server” editions. I plan on using the server edition when I rebuild Tnir. Next up is , and this time there are Cocoa, 64-bit Binaries for […]
          Knot3D   
Version 0 (1 Jul 2013) for Linux and Windows.
Knot3D is an open source Celtic knot rendering application. It'll render standard knots on a 2D plane with a 3D weave. It also generalises these to allow proper 3D knots to be rendered. More info...
Download: binary, source, screenshot.
          instalacion incompleta oracle   

instalacion incompleta oracle

estoy tratando de instalar oracle 11gXE en ubuntu pero me da este error:
Oracle Database 11g Express Edition Configuration ------------------------------------------------- This will configure on-boot properties of Oracle Database 11g Express Edition. The following questions will determine whether the database should be starting upon system boot, the ports it will use, and the passwords that will be used for database accounts. Press <Enter> to acce...

Publicado el 27 de Junio del 2017 por Josue

          Blade ZipKit - Detect All Java Versions for Linux   

Blade ZipKit Package Info

Name: Detect All Java Versions for Linux
Type:Extended Object
BSA Compatible Version: 8.9 (should be backwards compatible)
Version: 1.0

Created by: Tony Stevens

Tested on version: 8.9.x
Tested against host running on: RedHat 6,7, CentOS 7, Ubuntu 14.04

 

Instructions for importing the package:

  1. Download the two attached zip files
  2. Extract the content to a location accessible by the BSA Console
  3. From the BSA Console, select Configuration from the menu bar and click Config Object Dictionary View
  4. Select Extended Objects and Linux as shown here and click on the Import Configuration Object
  5. Browse to the location of the un-zipped folder (example: Detect Installed Java Versions)
  6. Click Finish
  7. The imported Extended Object should now be visible in the Extended Object view.
  8. Copy the nsh script to the app server, preferably to the /NSH/storage/scripts directory.
  9. Modify the path in the EO to point to the location of the script file.
  10. To test, select a Linux system in BladeLogic and Browse the server. Select Extended Objects in the Browse View and click Status. If done correctly, you should see the activation status on that server. See example below:


          DB2 Control Center on Linux shows blank window   
Man, I hadn’t posted anything in a while. Anyway, I was having trouble getting the DB2 Control Center (db2cc) to work on Ubuntu 8.04 (Hardy.) Whenever I tried to run db2cc all I was getting was a blank gray window … Continue reading
          DB2 on Ubuntu: “Database is damaged” error   
If you installed DB2 Express-C 9.5 on Ubuntu and can’t create any databases because of the following error: SQL1034C The database is damaged. All applications processing the database have been stopped. SQLSTATE=58031 Try adding the following to the /home/db2inst1/.bashrc file: … Continue reading
          How to Create a Tar File That Excludes Hidden Files and Folders   
This story has moved to NerdBoys.com. Please read this story at its new location.

          How to Use sudo tar in a Script Without Password Prompt   
This story has moved to NerdBoys.com. Please read this story at its new location.

          How to Find Out Which Version of Linux You Are Running   
This story has moved to NerdBoys.com. Please read this story at its new location.

          How to Copy a File and Get a Progress Bar, in Linux   
This story has moved to NerdBoys.com. Please read this story at its new location.

          Abiword 2.6.5   
Core Collaboration
  • Prevent a TLS tunnel from being created on the same port (Marc Maurer)
  • Prevent document frames from being hidden, which could result in AbiWord not exiting completely (Marc Maurer)
Editing/Layout
  • Bug 11307:  Arabic glyphs are not rendered correctly (Martin Sevior)
  • Bug 11636:  Bookmarks are not always inserted (sum1)
  • Bug 11665:  Text runs break within words (Martin Sevior)
  • Bug 11720:  Crash involving print preview and normal layout mode (Martin Sevior)
  • Bug 11773:  Crash when starting a list in an existing document (Martin Sevior)
GUI GTK+
  • Bug 10919:  AbiWord fails to import some PNG images (Jean Brefort)
  • Bug 10969:  AbiWord crashes when using the symbol dialog (Marc Maurer)
  • Bug 11539:  Fast scroll wheel usage does not change document scroll position (Martin Sevior)
  • Bug 11647:  AbiWord exits when overwriting an existing file in the save as dialog (Marc Maurer)
  • Bug 11709:  The title of the preferences dialog is not localized (Benno Schulenberg)
  • Bug 11731:  AbiWord crashes when opening the symbol dialog (Dominic Lachowicz, Marc Maurer)
  • Bug 11789:  Crash when opening the styles dialog on Ubuntu 8.10 (Alberto Milone)
  • Bug 11810:  No toolbar icons are displayed when AbiWord is run in certain locales (sum1)
  • Bug 495766 (Debian):  Crash when opening the font dialog (Marc Maurer, Xun Sun)
  • Fix a memory leak in the font dialog (Xun Sun)
  • Fix a start-up crash on Mac OS X (Marc Maurer)
  • Prevent a crash that could occur when automatically saving documents from multiple AbiWord windows (sum1)
Import/Export General
  • Improve error checking for memory stream inputs (Dominic Lachowicz)
  • Make --to-name no longer required for command line conversions (Dominic Lachowicz)
HTML
  • Fix a crash that would occur when exporting a text box with a missing wrap-mode property (sum1)
LaTeX
  • Clean up assorted code in the LaTeX exporter (Xun Sun)
  • Export MathML equations as LaTeX equations (Xun Sun)
  • Export PNG images (Xun Sun)
  • Fix a memory leak in the image export code (Xun Sun)
  • Handle merged table cells (Xun Sun)
  • Improve font size handling (Xun Sun)
  • Improve paper size support (Xun Sun)
  • Improve support for the default document language (Xun Sun)
  • Improve symbol font handling (Xun Sun)
  • Preserve the previous justification when dealing with footnotes (Xun Sun)
  • Require the fixltx2e package for subscript formatting (Xun Sun)
Office Open XML
  • Add an Office Open XML exporter (Firat Kiyak)
    • Features supported: formatted text, formatted paragraphs, page breaks, columns, tab stops, footnotes, endnotes, headers/footers, fields, images, hyperlinks, bookmarks, lists, tables, and text boxes
  • Handle a missing style type in the Office Open XML importer (sum1)
  • Ignore a missing paragraph alignment type in the Office Open XML importer (sum1)
OpenDocument
  • Bug 11790:  Crash when saving an imported OpenDocument file that has had rows added to it (sum1)
  • Prevent some potential crashes in the OpenDocument table export code (sum1)
OPML
  • Fix a crash that would occur when importing OPML files with no outline elements (sum1)
  • Improve the handling of OPML files that lack a body section (sum1)
RTF
  • Bug 11734:  Hyperlinked text is not imported correctly (Martin Sevior)
WML
  • Fix a crash that would occur when importing WML files with no paragraph elements (sum1)
  • Improve the handling of WML files that lack a card section (sum1)
XSL-FO
  • Escape block and span properties in the exporter (sum1)
Installer
  • Add mingwm10.dll to the installer (Ryan Pavlik)
Internationalization
  • Bug 11749:  Update the Basque (eu-ES) translation (Mikel Pascual)
  • Add Romanian (ro-RO) templates and system profiles (Lucian Constantin)
  • Update the Asturian (ast-ES) translation (Marcos Costales)
  • Update the Danish (da-DK) translation (Morten Juhl-Johansen Zölde-Fejér)
  • Update the Portuguese (pt-BR) translation (Daniel Oliveira Costa Lemos)
  • Update the Romanian (ro-RO) translation (Lucian Constantin)
  • Update the Slovak (sk-SK) translation (Jaroslav Rynik)
Development
  • Bug 11702:  Add RealmProtocol.h to source releases (Marc Maurer)
  • Bug 11797:  Makefile comment causes build failure (Hubert Figuiere)
  • Add a UT_colorToHex convenience function for color conversions (sum1)
  • Add abicollab_config.mk to source releases (Marc Maurer)
  • Fix the collaboration plug-in build on Windows (Marc Maurer)
  • Fix various warnings in the collaboration plug-in (Marc Maurer)
  • Maintain and release 2.6.5 (Marc Maurer)
  • Move the generic TLS tunnel code to the service back-end in the collaboration plug-in (Marc Maurer)
  • Remove some obsolete scripts (Marc Maurer)
  • Remove the Dashboard plug-in (Hubert Figuiere)
  • Remove the unmaintained Perl bindings (Marc Maurer)
  • Remove unused threading code (Marc Maurer)
  • Use a virtual destructor in the WordPerfect importer (Marc Maurer)

Download


          Bardziej czytelny dmesg   
P olecenie dmesg bardzo często jest używane przez administratorów do szukania problemów związanych z startem systemu, wykrywaniem sprzętu itd. Warto zwrócić uwagę, że od wersji 2.20 pakietu util-linux (Ubuntu 12.04) dostępnych jest kilka nowych opcji umożliwiających lepsze filtrowanie wyświetlanych informacji: – dodanie znaczników czasu: $ dmesg -T [ ... ] [Sun Dec 16 19:57:10 2012] […]
          Slackware 13.37 – /proc/sys/kernel/dmesg_restrict   
O d wersji jądra 2.6.37 wprowadzono mechanizm CONFIG_SECURITY_DMESG_RESTRICT (Restrict unprivileged access to the kernel syslog), który umożliwia określanie, czy zwykli użytkownicy systemu powinni mieć dostęp do polecenia dmesg służącego do wyświetlania informacji z dziennika bufora jądra. Jak zauważył Kees Cook z Ubuntu Security Team – syslog jądra (wspomniany dmesg[8]) pozostał jednym z ostatnich miejsc w […]
          0 - 10000   
1) google.com
2) facebook.com
3) youtube.com
4) yahoo.com
5) blogspot.com
6) baidu.com
7) wikipedia.org
8) live.com
9) twitter.com
10) qq.com
11) msn.com
12) yahoo.co.jp
13) linkedin.com
14) google.co.in
15) amazon.com
16) sina.com.cn
17) taobao.com
18) wordpress.com
19) google.com.hk
20) google.de
21) ebay.com
22) yandex.ru
23) google.co.uk
24) google.co.jp
25) bing.com
26) google.fr
27) 163.com
28) microsoft.com
29) weibo.com
30) paypal.com
31) google.com.br
32) flickr.com
33) mail.ru
34) apple.com
35) craigslist.org
36) fc2.com
37) googleusercontent.com
38) imdb.com
39) google.it
40) bbc.co.uk
41) google.ru
42) vkontakte.ru
43) sohu.com
44) tumblr.com
45) google.es
46) ask.com
47) livejasmin.com
48) xvideos.com
49) soso.com
50) youku.com
51) ifeng.com
52) go.com
53) cnn.com
54) bp.blogspot.com
55) google.com.mx
56) tudou.com
57) google.ca
58) aol.com
59) zedo.com
60) mediafire.com
61) xhamster.com
62) conduit.com
63) megaupload.com
64) godaddy.com
65) adobe.com
66) pornhub.com
67) google.co.id
68) about.com
69) alibaba.com
70) ameblo.jp
71) 4shared.com
72) ebay.de
73) espn.go.com
74) wordpress.org
75) livedoor.com
76) rakuten.co.jp
77) google.com.tr
78) google.com.au
79) youporn.com
80) babylon.com
81) uol.com.br
82) cnet.com
83) huffingtonpost.com
84) chinaz.com
85) livejournal.com
86) renren.com
87) thepiratebay.org
88) google.pl
89) ebay.co.uk
90) nytimes.com
91) t.co
92) amazon.de
93) alipay.com
94) tmall.com
95) imgur.com
96) dailymotion.com
97) myspace.com
98) cnzz.com
99) netflix.com
100) google.com.sa
101) odnoklassniki.ru
102) stumbleupon.com
103) badoo.com
104) globo.com
105) addthis.com
106) doubleclick.com
107) megaclick.com
108) twitpic.com
109) amazon.co.jp
110) secureserver.net
111) google.nl
112) douban.com
113) stackoverflow.com
114) orkut.com.br
115) dailymail.co.uk
116) orkut.com
117) weather.com
118) hao123.com
119) tianya.cn
120) tube8.com
121) reddit.com
122) goo.ne.jp
123) 360buy.com
124) sogou.com
125) pengyou.com
126) deviantart.com
127) vimeo.com
128) google.com.ar
129) imageshack.us
130) amazon.co.uk
131) photobucket.com
132) filestube.com
133) xnxx.com
134) warriorforum.com
135) fileserve.com
136) google.cn
137) redtube.com
138) 58.com
139) aweber.com
140) taringa.net
141) amazonaws.com
142) megavideo.com
143) torrentz.eu
144) google.co.th
145) google.com.pk
146) bankofamerica.com
147) spiegel.de
148) google.com.eg
149) sourceforge.net
150) xinhuanet.com
151) ehow.com
152) guardian.co.uk
153) clicksor.com
154) optmd.com
155) yfrog.com
156) nicovideo.jp
157) filesonic.com
158) digg.com
159) maktoob.com
160) mixi.jp
161) indiatimes.com
162) statcounter.com
163) fbcdn.net
164) rapidshare.com
165) rediff.com
166) foxnews.com
167) google.co.za
168) avg.com
169) download.com
170) ucoz.ru
171) ringtonepartner.com
172) adf.ly
173) yelp.com
174) liveinternet.ru
175) reference.com
176) rambler.ru
177) naver.com
178) booking.com
179) mashable.com
180) wikimedia.org
181) blogfa.com
182) etsy.com
183) ganji.com
184) reuters.com
185) yieldmanager.com
186) w3schools.com
187) zol.com.cn
188) chase.com
189) files.wordpress.com
190) onet.pl
191) youjizz.com
192) bild.de
193) wikia.com
194) ameba.jp
195) techcrunch.com
196) 56.com
197) answers.com
198) skype.com
199) domaintools.com
200) hotfile.com
201) kaixin001.com
202) terra.com.br
203) archive.org
204) clickbank.com
205) comcast.net
206) typepad.com
207) squidoo.com
208) salesforce.com
209) allegro.pl
210) wsj.com
211) google.com.my
212) digitalpoint.com
213) google.co.ve
214) free.fr
215) google.be
216) hootsuite.com
217) soufun.com
218) repubblica.it
219) telegraph.co.uk
220) xunlei.com
221) mywebsearch.com
222) qiyi.com
223) people.com.cn
224) soku.com
225) sparkstudios.com
226) walmart.com
227) orange.fr
228) wupload.com
229) google.com.co
230) hostgator.com
231) google.gr
232) leboncoin.fr
233) adultfriendfinder.com
234) scribd.com
235) china.com
236) php.net
237) tripadvisor.com
238) google.com.vn
239) espncricinfo.com
240) narod.ru
241) outbrain.com
242) wellsfargo.com
243) youdao.com
244) web.de
245) gmx.net
246) search-results.com
247) google.com.tw
248) hatena.ne.jp
249) linkwithin.com
250) tribalfusion.com
251) slideshare.net
252) 51.la
253) ezinearticles.com
254) libero.it
255) joomla.org
256) kaskus.us
257) hp.com
258) cam4.com
259) isohunt.com
260) netlog.com
261) themeforest.net
262) rutracker.org
263) dell.com
264) csdn.net
265) google.com.ua
266) twimg.com
267) 360.cn
268) cj.com
269) paipai.com
270) wp.pl
271) hulu.com
272) google.at
273) google.se
274) wretch.cc
275) plentyoffish.com
276) nifty.com
277) seesaa.net
278) tagged.com
279) fiverr.com
280) ning.com
281) mozilla.com
282) ikea.com
283) xing.com
284) google.ro
285) groupon.com
286) 2ch.net
287) pgmediaserve.com
288) homeway.com.cn
289) 10086.cn
290) mop.com
291) target.com
292) kat.ph
293) opendns.com
294) facemoods.com
295) in.com
296) constantcontact.com
297) ups.com
298) google.ch
299) daum.net
300) angege.com
301) icio.us
302) dropbox.com
303) 126.com
304) instagr.am
305) hudong.com
306) wordreference.com
307) google.com.ph
308) match.com
309) xe.com
310) searchqu.com
311) google.pt
312) dianping.com
313) usps.com
314) thefreedictionary.com
315) google.cl
316) goal.com
317) google.com.pe
318) marca.com
319) latimes.com
320) depositfiles.com
321) orkut.co.in
322) hubpages.com
323) pconline.com.cn
324) ig.com.br
325) ign.com
326) softonic.com
327) hardsextube.com
328) uimserv.net
329) biglobe.ne.jp
330) istockphoto.com
331) zimbio.com
332) yesky.com
333) ku6.com
334) mlb.com
335) corriere.it
336) bitly.com
337) google.com.ng
338) spankwire.com
339) w3.org
340) elpais.com
341) getfirebug.com
342) amazon.cn
343) bloomberg.com
344) seznam.cz
345) elance.com
346) washingtonpost.com
347) google.com.sg
348) ebay.it
349) metacafe.com
350) weebly.com
351) webs.com
352) histats.com
353) ynet.com
354) imagevenue.com
355) t-online.de
356) kooora.com
357) forbes.com
358) att.com
359) sakura.ne.jp
360) pandora.com
361) 51job.com
362) over-blog.com
363) businessinsider.com
364) kakaku.com
365) chinanews.com
366) mercadolivre.com.br
367) tmz.com
368) baixing.com
369) bestbuy.com
370) ebay.com.au
371) expedia.com
372) 4399.com
373) freelancer.com
374) samsung.com
375) google.co.kr
376) lockerz.com
377) soundcloud.com
378) android.com
379) mozilla.org
380) huanqiu.com
381) btjunkie.org
382) indeed.com
383) eastmoney.com
384) engadget.com
385) ero-advertising.com
386) milliyet.com.tr
387) google.ae
388) blog.163.com
389) nih.gov
390) rr.com
391) amung.us
392) admin5.com
393) partypoker.com
394) multiply.com
395) hurriyet.com.tr
396) alimama.com
397) love21cn.com
398) drupal.org
399) autohome.com.cn
400) gazeta.pl
401) people.com
402) elmundo.es
403) keezmovies.com
404) google.co.hu
405) rbc.ru
406) kuko115.com
407) vnexpress.net
408) softpedia.com
409) 39.net
410) mihanblog.com
411) comcast.com
412) leo.org
413) sitesell.com
414) feedburner.com
415) google.ie
416) drudgereport.com
417) livedoor.biz
418) basecamphq.com
419) lashou.com
420) youm7.com
421) mailchimp.com
422) gamespot.com
423) odesk.com
424) dmm.co.jp
425) commentcamarche.net
426) time.com
427) blackhatworld.com
428) bluehost.com
429) drtuber.com
430) informer.com
431) letitbit.net
432) tradedoubler.com
433) lenta.ru
434) mpnrs.com
435) americanexpress.com
436) myegy.com
437) cntv.cn
438) vmn.net
439) inetglobal.com
440) 51.com
441) google.dk
442) dangdang.com
443) jquery.com
444) fedex.com
445) camzap.com
446) aizhan.com
447) mynet.com
448) cnbc.com
449) pornhublive.com
450) ebay.fr
451) skyrock.com
452) thesun.co.uk
453) bigpoint.com
454) usatoday.com
455) vancl.com
456) tinypic.com
457) shutterstock.com
458) surveymonkey.com
459) naukri.com
460) zing.vn
461) geocities.jp
462) snapdeal.com
463) abcnews.go.com
464) sape.ru
465) google.no
466) meituan.com
467) mybrowserbar.com
468) xvideoslive.com
469) news.com.au
470) fatakat.com
471) zynga.com
472) neobux.com
473) discuz.net
474) slutload.com
475) miniclip.com
476) hc360.com
477) iloveyouxi.com
478) shareasale.com
479) gutefrage.net
480) qidian.com
481) justin.tv
482) meetup.com
483) exblog.jp
484) imesh.com
485) livingsocial.com
486) newegg.com
487) coupons.com
488) google.fi
489) google.cz
490) letmewatchthis.ch
491) verizonwireless.com
492) google.co.il
493) virgilio.it
494) way2sms.com
495) ya.ru
496) videobb.com
497) multiupload.com
498) aljazeera.net
499) seomoz.org
500) tweetmeme.com
501) gsmarena.com
502) pogo.com
503) duowan.com
504) mapquest.com
505) cocolog-nifty.com
506) pinterest.com
507) blackberry.com
508) altervista.org
509) posterous.com
510) ibm.com
511) extratorrent.com
512) asahi.com
513) careerbuilder.com
514) vk.com
515) tabelog.com
516) chip.de
517) ziddu.com
518) media.tumblr.com
519) monster.com
520) bitauto.com
521) swagbucks.com
522) exoplanetwar.com
523) wunderground.com
524) hdfcbank.com
525) foursquare.com
526) detik.com
527) tom.com
528) kinopoisk.ru
529) pchome.net
530) docin.com
531) verycd.com
532) brothersoft.com
533) github.com
534) zhaopin.com
535) sinaimg.cn
536) yomiuri.co.jp
537) mercadolibre.com.mx
538) hi5.com
539) demonoid.me
540) speedtest.net
541) wetter.com
542) wo.com.cn
543) immobilienscout24.de
544) peyvandha.ir
545) bearshare.com
546) marketwatch.com
547) oracle.com
548) gc.ca
549) hypergames.net
550) kijiji.ca
551) zillow.com
552) fotolia.com
553) yandex.ua
554) pptv.com
555) linksynergy.com
556) imagebam.com
557) ocn.ne.jp
558) beeg.com
559) xcar.com.cn
560) dmoz.org
561) irctc.co.in
562) battle.net
563) qip.ru
564) mobile.de
565) ovh.net
566) exoclick.com
567) amazon.fr
568) ustream.tv
569) abril.com.br
570) 115.com
571) hotels.com
572) who.is
573) am10.ru
574) nu.nl
575) macrumors.com
576) wix.com
577) habrahabr.ru
578) uploadstation.com
579) so-net.ne.jp
580) last.fm
581) grooveshark.com
582) allrecipes.com
583) lemonde.fr
584) cracked.com
585) disney.go.com
586) smashingmagazine.com
587) jugem.jp
588) templatemonster.com
589) oneindia.in
590) moneycontrol.com
591) cnblogs.com
592) cashtrafic.com
593) okcupid.com
594) jimdo.com
595) mercadolibre.com.ar
596) nextag.com
597) xtendmedia.com
598) letv.com
599) excite.co.jp
600) sitemeter.com
601) networkedblogs.com
602) appspot.com
603) tnaflix.com
604) webmd.com
605) mgid.com
606) anonym.to
607) clixsense.com
608) icontact.com
609) gotomeeting.com
610) tutsplus.com
611) softlayer.com
612) aliexpress.com
613) glispa.com
614) weather.com.cn
615) lequipe.fr
616) urbandictionary.com
617) priceline.com
618) cbsnews.com
619) formspring.me
620) gizmodo.com
621) traforet.ru
622) makepolo.com
623) qunar.com
624) yellowpages.com
625) force.com
626) verizon.com
627) getresponse.com
628) infolinks.com
629) qq937.com
630) as.com
631) 17kuxun.com
632) enterfactory.com
633) viadeo.com
634) ucoz.com
635) www.net.cn
636) flippa.com
637) oron.com
638) boston.com
639) dtiblog.com
640) blogimg.jp
641) nfl.com
642) pch.com
643) clickbank.net
644) persianblog.ir
645) admagnet.net
646) manta.com
647) capitalone.com
648) sulekha.com
649) google.co.ma
650) infusionsoft.com
651) timeanddate.com
652) rottentomatoes.com
653) whitepages.com
654) 4tube.com
655) sahibinden.com
656) gougou.com
657) mtv.com
658) paper.li
659) megaporn.com
660) tripod.com
661) mysql.com
662) pcpop.com
663) backpage.com
664) ibibo.com
665) warez-bb.org
666) bleacherreport.com
667) sponichi.co.jp
668) it168.com
669) retailmenot.com
670) theplanet.com
671) icicibank.com
672) lifehacker.com
673) 19lou.com
674) wired.com
675) focus.cn
676) sky.com
677) infoseek.co.jp
678) ekolay.net
679) ebay.in
680) enet.com.cn
681) google.sk
682) custhelp.com
683) scriptmafia.org
684) pixiv.net
685) varzesh3.com
686) seobook.com
687) com-net.info
688) 2345.com
689) atwiki.jp
690) xda-developers.com
691) vg.no
692) manzuo.com
693) webmoney.ru
694) allocine.fr
695) lzjl.com
696) itau.com.br
697) gismeteo.ru
698) webmasterworld.com
699) nasa.gov
700) nate.com
701) interia.pl
702) mtime.com
703) wikihow.com
704) realtor.com
705) sapo.pt
706) quikr.com
707) xtube.com
708) businessweek.com
709) hubspot.com
710) tuan800.com
711) searchengines.ru
712) sweetim.com
713) beemp3.com
714) arpg2.com
715) youboy.com
716) heise.de
717) issuu.com
718) ypmate.com
719) barnesandnoble.com
720) sanook.com
721) uploading.com
722) msn.ca
723) lefigaro.fr
724) dreamstime.com
725) accuweather.com
726) homedepot.com
727) ndtv.com
728) smh.com.au
729) foxsports.com
730) 17173.com
731) aftonbladet.se
732) kayak.com
733) 123rf.com
734) searchresultsdirect.com
735) putlocker.com
736) hyves.nl
737) babycenter.com
738) bodybuilding.com
739) radikal.ru
740) cmbchina.com
741) icbc.com.cn
742) alphaporno.com
743) filehippo.com
744) adult-empire.com
745) eluniversal.com.mx
746) overstock.com
747) inbox.com
748) dantri.com.vn
749) kompas.com
750) dyndns.org
751) telegraaf.nl
752) ca.gov
753) tuenti.com
754) elegantthemes.com
755) wiktionary.org
756) break.com
757) zhubajie.com
758) slickdeals.net
759) skysports.com
760) sfgate.com
761) hoopchina.com
762) nhk.or.jp
763) klout.com
764) songs.pk
765) 1und1.de
766) southwest.com
767) sfr.fr
768) ctrip.com
769) iminent.com
770) eyny.com
771) zanox-affiliate.de
772) onlinedown.net
773) ft.com
774) haberturk.com
775) howstuffworks.com
776) cnbeta.com
777) nk.pl
778) traidnt.net
779) orbitz.com
780) masrawy.com
781) freeones.com
782) myfreecams.com
783) google.co.nz
784) rtl.de
785) dict.cc
786) taleo.net
787) usbank.com
788) 7k7k.com
789) dealextreme.com
790) marktplaats.nl
791) pixnet.net
792) td.com
793) iteye.com
794) empflix.com
795) yiqifa.com
796) trulia.com
797) gap.com
798) yocc.net
799) 4chan.org
800) ahram.org.eg
801) magentocommerce.com
802) tf1.fr
803) linkbucks.com
804) goo.gl
805) alertpay.com
806) youjizzlive.com
807) seriesyonkis.com
808) asg.to
809) 178.com
810) me.com
811) opera.com
812) pornhost.com
813) sears.com
814) articlesbase.com
815) nikkei.com
816) metrolyrics.com
817) noaa.gov
818) trafficholder.com
819) picnik.com
820) rakuten.ne.jp
821) made-in-china.com
822) jeuxvideo.com
823) npr.org
824) zendesk.com
825) logmein.com
826) orf.at
827) okwave.jp
828) skycn.com
829) pagesjaunes.fr
830) google.com.kw
831) iciba.com
832) cbssports.com
833) examiner.com
834) tubegalore.com
835) namecheap.com
836) livescore.com
837) 9kele.com
838) discoverbing.com
839) pornerbros.com
840) nypost.com
841) java.com
842) liveperson.net
843) independent.co.uk
844) ninemsn.com.au
845) vivanews.com
846) icq.com
847) marketgid.com
848) pcworld.com
849) nokia.com
850) nipic.com
851) intuit.com
852) gazzetta.it
853) r7.com
854) welt.de
855) gamefaqs.com
856) zappos.com
857) ads8.com
858) justdial.com
859) bahn.de
860) pokerstrategy.com
861) perezhilton.com
862) google.lk
863) iconfinder.com
864) macys.com
865) google.bg
866) adultadworld.com
867) mail.com
868) askmen.com
869) idnes.cz
870) exbii.com
871) citibank.com
872) nextmedia.com
873) hsbc.co.uk
874) fastclick.com
875) compete.com
876) webhostingtalk.com
877) sueddeutsche.de
878) nydailynews.com
879) firstload.com
880) cbslocal.com
881) mcssl.com
882) kicker.de
883) lenovo.com
884) goodreads.com
885) wn.com
886) hostmonster.com
887) blogsky.com
888) earthlink.net
889) sxc.hu
890) dafont.com
891) mainichi.jp
892) tenpay.com
893) mangareader.net
894) google.az
895) 55tuan.com
896) m-w.com
897) foodnetwork.com
898) zoho.com
899) gumtree.com
900) uploaded.to
901) 24h.com.vn
902) google.com.qa
903) chinabroadcast.cn
904) novinky.cz
905) linternaute.com
906) groupon.de
907) httptrack.com
908) logsoku.com
909) woot.com
910) seowhy.com
911) acesse.com
912) mangafox.com
913) mediaset.it
914) mbc.net
915) forobeta.com
916) groupon.cn
917) hidemyass.com
918) sidereel.com
919) modelmayhem.com
920) ikariam.com
921) sitepoint.com
922) list-manage.com
923) adscale.de
924) cloob.com
925) ticketmaster.com
926) patch.com
927) lacaixa.es
928) clarin.com
929) ultimate-guitar.com
930) allabout.co.jp
931) linkhelper.cn
932) chinadaily.com.cn
933) movie2k.to
934) delta.com
935) gawker.com
936) adjuggler.net
937) giveawayoftheday.com
938) filecrop.com
939) brazzers.com
940) vistaprint.com
941) usmagazine.com
942) politico.com
943) cy-pr.com
944) damnlol.com
945) alice.it
946) vente-privee.com
947) discovery.com
948) intel.com
949) tiexue.net
950) technorati.com
951) meteofrance.com
952) icicibank.co.in
953) uuu9.com
954) 6.cn
955) networksolutions.com
956) tinyurl.com
957) auto.ru
958) codecanyon.net
959) shopathome.com
960) armorgames.com
961) nikkansports.com
962) nikkeibp.co.jp
963) pcmag.com
964) makeuseof.com
965) spotify.com
966) quora.com
967) apache.org
968) incredimail.com
969) google.com.do
970) fishki.net
971) 37see.com
972) ilmeteo.it
973) naver.jp
974) eventbrite.com
975) 78day.com
976) ibtimes.com
977) jalan.net
978) ip138.com
979) panoramio.com
980) norton.com
981) nationalgeographic.com
982) weiphone.com
983) docstoc.com
984) topix.com
985) plimus.com
986) streamate.com
987) tabnak.ir
988) hawaaworld.com
989) yoo7.com
990) onetad.com
991) postbank.de
992) lowes.com
993) musica.com
994) zazzle.com
995) google.kz
996) rightmove.co.uk
997) y8.com
998) markosweb.com
999) europa.eu
1000) searchengineland.com
1001) china.com.cn
1002) blackhatteam.com
1003) ct10000.com
1004) tistory.com
1005) aruba.it
1006) doctissimo.fr
1007) google.com.ec
1008) alot.com
1009) livestrong.com
1010) kongregate.com
1011) box.net
1012) stern.de
1013) fandango.com
1014) warriorplus.com
1015) rutube.ru
1016) craigslist.ca
1017) gamer.com.tw
1018) aibang.com
1019) xici.net
1020) noticias24.com
1021) ryanair.com
1022) 1and1.com
1023) focus.de
1024) klikbca.com
1025) fifa.com
1026) pantip.com
1027) worldstarhiphop.com
1028) failblog.org
1029) indianrail.gov.in
1030) yousendit.com
1031) buysellads.com
1032) idealo.de
1033) ria.ru
1034) staples.com
1035) jiathis.com
1036) indiamart.com
1037) dhgate.com
1038) chicagotribune.com
1039) xhamstercams.com
1040) cuevana.tv
1041) yam.com
1042) asos.com
1043) friendfeed.com
1044) travelocity.com
1045) yimg.com
1046) aufeminin.com
1047) gnavi.co.jp
1048) hh.ru
1049) sanspo.com
1050) domainsite.com
1051) groupon.com.br
1052) egotastic.com
1053) cookpad.com
1054) mangastream.com
1055) semrush.com
1056) blog.com
1057) 21cn.com
1058) gamezer.com
1059) impress.co.jp
1060) bet365.com
1061) bitsnoop.com
1062) niksalehi.com
1063) gstatic.com
1064) subscene.com
1065) livestream.com
1066) pog.com
1067) newsru.com
1068) jcpenney.com
1069) wikimapia.org
1070) tataindicom.com
1071) letsbonus.com
1072) wer-kennt-wen.de
1073) baomihua.com
1074) 888.com
1075) japanpost.jp
1076) veoh.com
1077) 24quan.com
1078) ynet.co.il
1079) 120ask.com
1080) arabseed.com
1081) fastpic.ru
1082) laredoute.fr
1083) subscribe.ru
1084) groupalia.com
1085) porntube.com
1086) bhphotovideo.com
1087) steampowered.com
1088) ebay.ca
1089) bigfishgames.com
1090) getclicky.com
1091) zdnet.com
1092) airtelforum.com
1093) cz.cc
1094) gazeta.ru
1095) pr-cy.ru
1096) alarabiya.net
1097) sabah.com.tr
1098) mylife.com
1099) groupon.co.uk
1100) groupon.it
1101) shaadi.com
1102) 5d6d.com
1103) slate.com
1104) woothemes.com
1105) weather.gov
1106) 52pk.net
1107) zedge.net
1108) merchantcircle.com
1109) costco.com
1110) ted.com
1111) hattrick.org
1112) chosun.com
1113) eonline.com
1114) multitran.ru
1115) kohls.com
1116) ebay.es
1117) celebuzz.com
1118) cinetube.es
1119) proboards.com
1120) buzzfeed.com
1121) state.gov
1122) yandex.net
1123) shinobi.jp
1124) commbank.com.au
1125) ew.com
1126) liveleak.com
1127) flipkart.com
1128) mamba.ru
1129) zanox.com
1130) myp2p.eu
1131) eqla3.com
1132) virtapay.com
1133) forosdelweb.com
1134) teacup.com
1135) tgbus.com
1136) brainyquote.com
1137) bizrate.com
1138) subito.it
1139) city-data.com
1140) cox.net
1141) poste.it
1142) makemytrip.com
1143) ancestry.com
1144) novamov.com
1145) freakshare.com
1146) buzznet.com
1147) asp.net
1148) ping.fm
1149) bidorbuy.co.za
1150) mercadolibre.com.ve
1151) timesjobs.com
1152) 1717388.com
1153) avito.ru
1154) deutsche-bank.de
1155) 01net.com
1156) tube8live.com
1157) donews.com
1158) htc.com
1159) sify.com
1160) barclays.co.uk
1161) shopping.com
1162) xbox.com
1163) lycos.com
1164) buscape.com.br
1165) boursorama.com
1166) 51cto.com
1167) pingomatic.com
1168) quantcast.com
1169) direct.gov.uk
1170) userporn.com
1171) sabq.org
1172) mp3raid.com
1173) google.hr
1174) detiknews.com
1175) free-lance.ru
1176) kitco.com
1177) lanacion.com.ar
1178) easyhits4u.com
1179) funshion.com
1180) pixmania.com
1181) mobile9.com
1182) ggpht.com
1183) msn.com.cn
1184) jiji.com
1185) sedo.com
1186) 91mangrandi.com
1187) autotrader.com
1188) t-mobile.com
1189) ing.nl
1190) citysearch.com
1191) mihandownload.com
1192) vesti.ru
1193) r10.net
1194) joy.cn
1195) startimes.com
1196) infobae.com
1197) cisco.com
1198) sunporno.com
1199) ip-adress.com
1200) seekingalpha.com
1201) simplyhired.com
1202) dynamicdrive.com
1203) onlinesbi.com
1204) yihaodian.com
1205) payserve.com
1206) ozon.ru
1207) prchecker.info
1208) citrixonline.com
1209) zaobao.com
1210) elcomercio.pe
1211) yaplog.jp
1212) dagbladet.no
1213) en.wordpress.com
1214) programme-tv.net
1215) aa.com
1216) twcczhu.com
1217) ulink.cc
1218) dpreview.com
1219) mediatakeout.com
1220) partycasino.com
1221) hilton.com
1222) copyscape.com
1223) avast.com
1224) realitykings.com
1225) yandex.kz
1226) 86mmo.com
1227) mohegunsun.com
1228) fling.com
1229) extremetube.com
1230) rk.com
1231) perfectgirls.net
1232) xkcd.com
1233) plala.or.jp
1234) dl4all.com
1235) sendspace.com
1236) itmedia.co.jp
1237) ansa.it
1238) sdo.com
1239) google.lt
1240) reliancenetconnect.co.in
1241) thechive.com
1242) opensiteexplorer.org
1243) hinet.net
1244) tdcanadatrust.com
1245) shangdu.com
1246) nuomi.com
1247) avaxhome.ws
1248) keepvid.com
1249) groupon.ru
1250) kioskea.net
1251) blinkx.com
1252) marriott.com
1253) mercadolibre.com
1254) ebuddy.com
1255) yoka.com
1256) yallakora.com
1257) tiscali.it
1258) whois.net
1259) m5zn.com
1260) smowtion.com
1261) jma.go.jp
1262) playstation.com
1263) babytree.com
1264) bidvertiser.com
1265) newsmax.com
1266) pchome.com.tw
1267) instructables.com
1268) 88db.com
1269) google.by
1270) mazika2day.com
1271) 1stwebdesigner.com
1272) wahoha.com
1273) tomshardware.com
1274) eastday.com
1275) easyjet.com
1276) fixya.com
1277) jrj.com.cn
1278) panet.co.il
1279) 9gag.com
1280) oricon.co.jp
1281) affili.net
1282) ccb.com
1283) zoosk.com
1284) wowhead.com
1285) bigresource.com
1286) bookryanair.com
1287) yahoo-mbga.jp
1288) thepostgame.com
1289) nordstrom.com
1290) shufuni.com
1291) udn.com
1292) news24.com
1293) cyworld.com
1294) pornoxo.com
1295) medicinenet.com
1296) ea.com
1297) behance.net
1298) sprint.com
1299) ilivid.com
1300) poringa.net
1301) kp.ru
1302) googlelabs.com
1303) voyages-sncf.com
1304) edeng.cn
1305) moneybookers.com
1306) zwaar.net
1307) no-ip.com
1308) video2mp3.net
1309) techweb.com.cn
1310) 1o26.com
1311) olx.in
1312) collegehumor.com
1313) biblegateway.com
1314) disqus.com
1315) lonelyplanet.com
1316) diigo.com
1317) theweathernetwork.com
1318) vagos.es
1319) grepolis.com
1320) tripadvisor.co.uk
1321) legacy.com
1322) easy-share.com
1323) ubuntuforums.org
1324) bitshare.com
1325) 360doc.com
1326) fidelity.com
1327) template-help.com
1328) fanfiction.net
1329) chinabyte.com
1330) leagueoflegends.com
1331) zshare.net
1332) univision.com
1333) mbank.com.pl
1334) xyxy.net
1335) iltalehti.fi
1336) netvibes.com
1337) pho.to
1338) toysrus.com
1339) thenextweb.com
1340) lastminute.com
1341) ksl.com
1342) myyearbook.com
1343) n-tv.de
1344) met-art.com
1345) tut.by
1346) speedbit.com
1347) blocket.se
1348) ioffer.com
1349) cafemom.com
1350) pole-emploi.fr
1351) piriform.com
1352) google.iq
1353) girlsgogames.com
1354) bbb.org
1355) farsnews.com
1356) newgrounds.com
1357) cbc.ca
1358) priceminister.com
1359) virginmedia.com
1360) ovi.com
1361) bookofsex.com
1362) tigerdirect.com
1363) gumtree.co.za
1364) boc.cn
1365) thedailybeast.com
1366) monsterindia.com
1367) videobash.com
1368) esmas.com
1369) victoriassecret.com
1370) premierleague.com
1371) akhbarak.net
1372) ole.com.ar
1373) betfair.com
1374) screencast.com
1375) ad6media.fr
1376) daniweb.com
1377) szn.cz
1378) fanpop.com
1379) slashdot.org
1380) shoplocal.com
1381) reverso.net
1382) iza.ne.jp
1383) ubuntu.com
1384) finn.no
1385) ppstream.com
1386) 91.com
1387) tesco.com
1388) sport.es
1389) natwest.com
1390) sp.gov.br
1391) hotwire.com
1392) vid2c.com
1393) stanford.edu
1394) onbux.com
1395) codeproject.com
1396) nike.com
1397) google.com.bd
1398) 2leep.com
1399) webry.info
1400) abc.net.au
1401) cleartrip.com
1402) discovercard.com
1403) shop-pro.jp
1404) whirlpool.net.au
1405) microsoftonline.com
1406) persianv.com
1407) mobile01.com
1408) mediaplex.com
1409) duote.com
1410) labnol.org
1411) asus.com
1412) videozer.com
1413) 000webhost.com
1414) xiami.com
1415) bharatstudent.com
1416) utorrent.com
1417) associatedcontent.com
1418) bhaskar.com
1419) directadvert.ru
1420) gfan.com
1421) tagesschau.de
1422) mit.edu
1423) talkfusion.com
1424) mts.ru
1425) autoscout24.de
1426) wayn.com
1427) dospy.com
1428) gmw.cn
1429) downloadhelper.net
1430) weblio.jp
1431) tvguide.com
1432) economist.com
1433) incrasebux.com
1434) overthumbs.com
1435) cbs.com
1436) hm.com
1437) cafepress.com
1438) rednet.cn
1439) turbobit.net
1440) 27.cn
1441) plurk.com
1442) accountonline.com
1443) walgreens.com
1444) gayromeo.com
1445) smartresponder.ru
1446) bramjnet.com
1447) zap2it.com
1448) zcool.com.cn
1449) studiopress.com
1450) ap.org
1451) chefkoch.de
1452) angelfire.com
1453) theglobeandmail.com
1454) mp3skull.com
1455) 20minutos.es
1456) pornorama.com
1457) canalblog.com
1458) 2chblog.jp
1459) index.hu
1460) digitalprose.com
1461) dhl.de
1462) clubic.com
1463) adriver.ru
1464) argos.co.uk
1465) globe7.com
1466) google.co.ke
1467) telecomitalia.it
1468) eztv.it
1469) wmtransfer.com
1470) videosurf.com
1471) yatra.com
1472) debonairblog.com
1473) mayoclinic.com
1474) foxtab.com
1475) sony.com
1476) phpwind.net
1477) moneysavingexpert.com
1478) discuss.com.hk
1479) google.com.om
1480) pixhost.org
1481) ftuan.com
1482) addictinggames.com
1483) moviefone.com
1484) qingdaonews.com
1485) stackexchange.com
1486) skyscrapercity.com
1487) zaycev.net
1488) yolasite.com
1489) prestashop.com
1490) forumcommunity.net
1491) pcauto.com.cn
1492) tokobagus.com
1493) club-asteria.com
1494) tebyan.net
1495) hotpepper.jp
1496) xxxmatch.com
1497) detiksport.com
1498) digitalmarketer.com
1499) daqi.com
1500) realclearpolitics.com
1501) theage.com.au
1502) airbnb.com
1503) gogetlinks.net
1504) deezer.com
1505) cnfol.com
1506) ebookee.org
1507) bodisparking.com
1508) wetteronline.de
1509) rockettheme.com
1510) buzzle.com
1511) mundoanuncio.com
1512) blogmura.com
1513) googlesyndication.com
1514) sedoparking.com
1515) submarino.com.br
1516) sport1.de
1517) buy.com
1518) heyos.com
1519) linezing.com
1520) berkeley.edu
1521) cdiscount.com
1522) mthai.com
1523) carview.co.jp
1524) securepaynet.net
1525) ddmap.com
1526) azlyrics.com
1527) khabaronline.ir
1528) google.tn
1529) adbrite.com
1530) adsense-id.com
1531) forumfree.it
1532) sanjesh.org
1533) irs.gov
1534) css-tricks.com
1535) mercola.com
1536) transfermarkt.de
1537) advertstream.com
1538) dion.ne.jp
1539) jang.com.pk
1540) evernote.com
1541) ovh.com
1542) stardoll.com
1543) ce.cn
1544) goalunited.org
1545) autotrader.co.uk
1546) haodf.com
1547) nymag.com
1548) juicyads.com
1549) trialpay.com
1550) sulit.com.ph
1551) ilfattoquotidiano.it
1552) tv.com
1553) icson.com
1554) centrum.cz
1555) addtoany.com
1556) echo.msk.ru
1557) derstandard.at
1558) apserver.net
1559) real.com
1560) google.com.ly
1561) rackspace.com
1562) americanas.com.br
1563) metafilter.com
1564) ligatus.com
1565) woorank.com
1566) tineye.com
1567) jsoftj.com
1568) freeporn.com
1569) couchsurfing.org
1570) 3992929.com
1571) fatwallet.com
1572) yandex.by
1573) otto.de
1574) mapion.co.jp
1575) authorize.net
1576) nbcsports.com
1577) seitwert.de
1578) google.com.gt
1579) mydala.com
1580) ruten.com.tw
1581) o2.pl
1582) venere.com
1583) hongkiat.com
1584) national-lottery.co.uk
1585) donanimhaber.com
1586) hangame.co.jp
1587) oyunlar1.com
1588) gd118114.cn
1589) deviantclip.com
1590) leparisien.fr
1591) wimp.com
1592) cloudfront.net
1593) forexfactory.com
1594) msn.co.jp
1595) dpstream.net
1596) cnr.cn
1597) zalando.de
1598) santabanta.com
1599) e-hentai.org
1600) pudelek.pl
1601) drom.ru
1602) attachmail.ru
1603) 55bbs.com
1604) dreamhost.com
1605) advfn.com
1606) clickindia.com
1607) gravatar.com
1608) trademe.co.nz
1609) searchina.ne.jp
1610) google.si
1611) reverbnation.com
1612) cpanel.net
1613) jappy.de
1614) zeit.de
1615) gametrailers.com
1616) zjol.com.cn
1617) fnac.com
1618) correios.com.br
1619) rivals.com
1620) seopult.ru
1621) ycombinator.com
1622) filesonic.in
1623) zhenai.com
1624) urbanspoon.com
1625) echoroukonline.com
1626) wer-weiss-was.de
1627) feedsportal.com
1628) otomoto.pl
1629) rlslog.net
1630) theatlantic.com
1631) filesonic.jp
1632) strato.de
1633) thinkexist.com
1634) caixa.gov.br
1635) widgeo.net
1636) eltiempo.com
1637) nairaland.com
1638) sports.ru
1639) justbeenpaid.com
1640) motherless.com
1641) mexat.com
1642) hiapk.com
1643) rtve.es
1644) ex.ua
1645) artlebedev.ru
1646) west263.com
1647) bt.com
1648) dict.cn
1649) blogbus.com
1650) anz.com
1651) airtel.in
1652) literotica.com
1653) edmunds.com
1654) donga.com
1655) google.co.cr
1656) moneymakerdiscussion.com
1657) super.cz
1658) majesticseo.com
1659) ilsole24ore.com
1660) teamviewer.com
1661) seoquake.com
1662) ems.com.cn
1663) harvard.edu
1664) comdirect.de
1665) wwe.com
1666) hindustantimes.com
1667) mcafee.com
1668) tiu.ru
1669) polyvore.com
1670) webex.com
1671) hulkshare.com
1672) free-tv-video-online.me
1673) heroturko.com
1674) cartoonnetwork.com
1675) yr.no
1676) feedjit.com
1677) sonico.com
1678) privalia.com
1679) ime.nu
1680) xpg.com.br
1681) speakasiaonline.com
1682) thehindu.com
1683) meebo.com
1684) xinmin.cn
1685) faz.net
1686) vevo.com
1687) alisoft.com
1688) trenitalia.com
1689) blogs.com
1690) streamiz.com
1691) justhost.com
1692) fujitv.co.jp
1693) daily.co.jp
1694) biglion.ru
1695) superpages.com
1696) hostnoc.net
1697) gittigidiyor.com
1698) penguinvids.com
1699) 96pk.com
1700) xi666.com
1701) meinvz.net
1702) whatismyipaddress.com
1703) pornbanana.com
1704) autoblog.com
1705) onlylady.com
1706) symantec.com
1707) meneame.net
1708) vatgia.com
1709) 47news.jp
1710) xxxbunker.com
1711) mp-success.com
1712) studiverzeichnis.com
1713) hamusoku.com
1714) sing365.com
1715) tv-links.eu
1716) charter.net
1717) vietnamnet.vn
1718) squarespace.com
1719) pingdom.com
1720) oanda.com
1721) ebaumsworld.com
1722) gaopeng.com
1723) 20minutes.fr
1724) anjuke.com
1725) goldporntube.com
1726) aebn.net
1727) estadao.com.br
1728) siteground.com
1729) libertyreserve.com
1730) w3school.com.cn
1731) porn.com
1732) championat.com
1733) starwoodhotels.com
1734) bestcoolmobile.com
1735) nick.com
1736) sympatico.ca
1737) fotostrana.ru
1738) local.com
1739) nownews.com
1740) ldblog.jp
1741) kotaku.com
1742) iij4u.or.jp
1743) smugmug.com
1744) discogs.com
1745) bdr130.net
1746) groupon.fr
1747) blogcu.com
1748) tweetdeck.com
1749) hornymatches.com
1750) united.com
1751) extremetracking.com
1752) kdnet.net
1753) investopedia.com
1754) kwejk.pl
1755) ashemaletube.com
1756) problogger.net
1757) bmi.ir
1758) nouvelobs.com
1759) cox.com
1760) te3p.com
1761) unam.mx
1762) agame.com
1763) imagefap.com
1764) jpmp3.com
1765) voila.fr
1766) pastebin.com
1767) safecheckpoint.net
1768) ename.cn
1769) ci123.com
1770) benisonapparel.com
1771) tizag.com
1772) manhunt.net
1773) blog.hu
1774) jinti.com
1775) optimum.net
1776) sourtimes.org
1777) gigaom.com
1778) cnxad.com
1779) pipl.com
1780) webgozar.com
1781) gongchang.com
1782) lynda.com
1783) vanguardngr.com
1784) filefactory.com
1785) 114la.com
1786) salon.com
1787) lightinthebox.com
1788) alc.co.jp
1789) ifile.it
1790) liutilities.com
1791) dedecms.com
1792) information.com
1793) etao.com
1794) king.com
1795) prlog.org
1796) lloydstsb.co.uk
1797) mediotiempo.com
1798) qvc.com
1799) boxofficemojo.com
1800) arstechnica.com
1801) demotywatory.pl
1802) goldenline.pl
1803) yaolan.com
1804) online.sh.cn
1805) uwants.com
1806) infinitybux.com
1807) xilu.com
1808) pclady.com.cn
1809) diary.ru
1810) usaa.com
1811) video-one.com
1812) gigazine.net
1813) citibank.co.in
1814) argentinawarez.com
1815) paisalive.com
1816) friv.com
1817) aftenposten.no
1818) aboutus.org
1819) bankmellat.ir
1820) play.com
1821) manager.co.th
1822) lego.com
1823) sedty.com
1824) bookmyshow.com
1825) fool.com
1826) crunchbase.com
1827) imagetwist.com
1828) graphicriver.net
1829) clubpenguin.com
1830) marketgid.info
1831) vatanim.com.tr
1832) eenadu.net
1833) graaam.com
1834) flirt4free.com
1835) partypoker.it
1836) searchdiscovered.com
1837) funnyordie.com
1838) dipan.com
1839) travelzoo.com
1840) 01hr.com
1841) jobsdb.com
1842) jetblue.com
1843) your-server.de
1844) buyvip.com
1845) fastbrowsersearch.com
1846) prweb.com
1847) airasia.com
1848) popads.net
1849) xml-sitemaps.com
1850) prnewswire.com
1851) nba.com
1852) ngoisao.net
1853) xl.pt
1854) techrepublic.com
1855) 53kf.com
1856) flingvibe.com
1857) getiton.com
1858) hespress.com
1859) g9g.com
1860) vodafone.it
1861) holidaycheck.de
1862) izlesene.com
1863) madthumbs.com
1864) bankrate.com
1865) fotolog.net
1866) i.ua
1867) suning.com
1868) boardreader.com
1869) livehotty.com
1870) bb.com.br
1871) rabobank.nl
1872) abc.go.com
1873) entrepreneur.com
1874) qype.com
1875) sonyericsson.com
1876) woch.com
1877) toptenreviews.com
1878) websitewelcome.com
1879) 1ting.com
1880) petardas.com
1881) sub.jp
1882) a8.net
1883) jp-sex.com
1884) scout.com
1885) rt.com
1886) nic.ru
1887) cqtiyu.com
1888) lexpress.fr
1889) voanews.com
1890) sport.pl
1891) shutterfly.com
1892) html.it
1893) z5x.net
1894) androidforums.com
1895) forocoches.com
1896) omniture.com
1897) wmmail.ru
1898) jqueryui.com
1899) chitika.com
1900) ixbt.com
1901) instaforex.com
1902) wjunction.com
1903) meishichina.com
1904) infowars.com
1905) manutd.com
1906) myvideo.de
1907) crsky.com
1908) safe-swaps.com
1909) toocle.com
1910) imageporter.com
1911) unkar.org
1912) evite.com
1913) searchenginewatch.com
1914) realestate.com.au
1915) pichunter.com
1916) fox.com
1917) mirtesen.ru
1918) brg8.com
1919) kapook.com
1920) 99designs.com
1921) fastcompany.com
1922) gilt.com
1923) dinamalar.com
1924) bancobrasil.com.br
1925) zhihu.com
1926) 17u.cn
1927) 6park.com
1928) westpac.com.au
1929) prav.tv
1930) planetsuzy.org
1931) erepublik.com
1932) nickjr.com
1933) say-move.org
1934) caribbeancom.com
1935) commissiondomination.com
1936) surveyspaid.com
1937) thestreet.com
1938) ukr.net
1939) garmin.com
1940) etrade.com
1941) xuite.net
1942) tvn24.pl
1943) boygj.com
1944) poco.cn
1945) cars.com
1946) billdesk.com
1947) rutor.org
1948) yuku.com
1949) java2s.com
1950) sciencedirect.com
1951) efukt.com
1952) readwriteweb.com
1953) credit-agricole.fr
1954) kickstarter.com
1955) travian.com
1956) zdf.de
1957) peeplo.com
1958) zozo.jp
1959) unaico.com
1960) weheartit.com
1961) flixya.com
1962) techtudo.com.br
1963) justanswer.com
1964) forums.wordpress.com
1965) superjob.ru
1966) dbank.com
1967) nyaa.eu
1968) iltasanomat.fi
1969) meinestadt.de
1970) washingtontimes.com
1971) bradesco.com.br
1972) downloadweb.org
1973) abc.es
1974) adam4adam.com
1975) okezone.com
1976) xdating.com
1977) kinox.to
1978) pornolab.net
1979) im286.com
1980) theoatmeal.com
1981) 3366.com
1982) webrankinfo.com
1983) chroniccommissions.com
1984) freeonlinegames.com
1985) travian.ir
1986) xmarks.com
1987) gob.ve
1988) mirror.co.uk
1989) oschina.net
1990) eversave.com
1991) care2.com
1992) wisegeek.com
1993) kbb.com
1994) iwebtool.com
1995) 60photos.com
1996) hollywoodreporter.com
1997) handelsblatt.com
1998) walla.co.il
1999) zippyshare.com
2000) buienradar.nl
2001) alriyadh.com
2002) milenio.com
2003) liverpoolfc.tv
2004) google.com.pr
2005) internetdownloadmanager.com
2006) shopzilla.com
2007) surveyrouter.com
2008) newsnow.co.uk
2009) abola.pt
2010) resellerclub.com
2011) moonbasa.com
2012) pandora.tv
2013) vnet.cn
2014) theblaze.com
2015) incomehybrid.com
2016) boingboing.net
2017) socialmediaexaminer.com
2018) hotmail.com
2019) guiaconsumidor.com
2020) ed.gov
2021) sponsoredreviews.com
2022) wincoremarketing.com
2023) bnpparibas.net
2024) lurkmore.ru
2025) theonion.com
2026) forumactif.com
2027) alfalfalfa.com
2028) naughtyamerica.com
2029) joomlart.com
2030) with2.net
2031) twiends.com
2032) expressen.se
2033) garanti.com.tr
2034) ow.ly
2035) sbrf.ru
2036) dilandau.eu
2037) linuxquestions.org
2038) topsy.com
2039) travian.com.sa
2040) wikispaces.com
2041) torrent411.com
2042) myorderbox.com
2043) logitech.com
2044) my-hit.ru
2045) watchseries.eu
2046) newegg.com.cn
2047) nrk.no
2048) goarticles.com
2049) eluniversal.com
2050) quepasa.com
2051) backlinkwatch.com
2052) gamestop.com
2053) menshealth.com
2054) macworld.com
2055) getsatisfaction.com
2056) dmm.com
2057) eharmony.com
2058) wordtracker.com
2059) benaughty.com
2060) bedbathandbeyond.com
2061) imvu.com
2062) kaspersky.com
2063) prizee.com
2064) pbskids.org
2065) tripadvisor.it
2066) joinsmsn.com
2067) ahlamontada.com
2068) gather.com
2069) ceneo.pl
2070) nabble.com
2071) xmbs.jp
2072) foundationapi.com
2073) cooks.com
2074) lyricsmode.com
2075) babble.com
2076) itrack.it
2077) ba-k.com
2078) kelkoo.com
2079) suntimes.com
2080) wat.tv
2081) lastampa.it
2082) m18.com
2083) techarena.in
2084) abnamro.nl
2085) britishairways.com
2086) scottrade.com
2087) sinaapp.com
2088) xrea.com
2089) bizjournals.com
2090) ipage.com
2091) clickbooth.com
2092) peixeurbano.com.br
2093) yuvutu.com
2094) thesuperficial.com
2095) boerse.bz
2096) vivastreet.fr
2097) azet.sk
2098) translate.ru
2099) codeplex.com
2100) express.com.pk
2101) gamewan.net
2102) chinahr.com
2103) overture.com
2104) netfirms.com
2105) cvs.com
2106) 17u.com
2107) webpagetest.org
2108) mudah.my
2109) infojobs.net
2110) mobifiesta.com
2111) ekstrabladet.dk
2112) axisbank.co.in
2113) miralinks.ru
2114) hotscripts.com
2115) microsofttranslator.com
2116) ad1111.com
2117) hinews.cn
2118) sina.com
2119) tokyo.jp
2120) images-amazon.com
2121) pornbb.org
2122) laposte.net
2123) lloydstsb.com
2124) wickedfire.com
2125) yahoo.com.cn
2126) kinozal.tv
2127) bash.org.ru
2128) stooorage.com
2129) parallels.com
2130) mobileraffles.com
2131) uuzu.com
2132) sixrevisions.com
2133) vador.com
2134) monova.org
2135) abchina.com
2136) members.webs.com
2137) thestar.com
2138) persiangig.com
2139) find-fast-answers.com
2140) auctiva.com
2141) spokeo.com
2142) segundamano.es
2143) sharethis.com
2144) zara.com
2145) agoda.com
2146) redbox.com
2147) 1saleaday.com
2148) rueducommerce.fr
2149) foxtv.es
2150) oodle.com
2151) imlive.com
2152) comedycentral.com
2153) netteller.com
2154) axisbank.com
2155) shopstyle.com
2156) continental.com
2157) home.pl
2158) googlecode.com
2159) fobshanghai.com
2160) tuttomercatoweb.com
2161) ashleymadison.com
2162) ivillage.com
2163) mcanime.net
2164) nwolb.com
2165) aeriagames.com
2166) paypal.de
2167) zongheng.com
2168) juegos.com
2169) cduniverse.com
2170) maybank2u.com.my
2171) citi.com
2172) epinions.com
2173) dumpert.nl
2174) twitlonger.com
2175) novoteka.ru
2176) eskimotube.com
2177) mufg.jp
2178) finanzen.net
2179) homestead.com
2180) webstatschecker.com
2181) zerohedge.com
2182) inc.com
2183) nudevista.com
2184) myntra.com
2185) nwsource.com
2186) voyeurweb.com
2187) state.tx.us
2188) ifensi.com
2189) filmweb.pl
2190) tfl.gov.uk
2191) yootheme.com
2192) caisse-epargne.fr
2193) tbs.co.jp
2194) wumii.com
2195) ngacn.cc
2196) arabseyes.com
2197) 1337x.org
2198) locaweb.com.br
2199) jigsaw.com
2200) digitalspy.co.uk
2201) liberation.fr
2202) royalmail.com
2203) usafis.org
2204) 500px.com
2205) ovguide.com
2206) metro.co.uk
2207) dhl.com
2208) standardbank.co.za
2209) dayoo.com
2210) sodahead.com
2211) torrenthound.com
2212) registro.br
2213) blueidea.com
2214) twitterfeed.com
2215) hotsales.net
2216) mirrorcreator.com
2217) yhchuanqi.com
2218) home.ne.jp
2219) imanhua.com
2220) newsvine.com
2221) ycasmd.info
2222) stubhub.com
2223) filgoal.com
2224) mail2web.com
2225) googleblog.blogspot.com
2226) cncn.com
2227) gulli.com
2228) kmart.com
2229) techradar.com
2230) peliculasyonkis.com
2231) iwiw.hu
2232) officedepot.com
2233) lapatilla.com
2234) sockshare.com
2235) mundodeportivo.com
2236) warnerbros.com
2237) beeline.ru
2238) zpag.es
2239) bbc.com
2240) chacha.com
2241) chinaunix.net
2242) seemorgh.com
2243) haber7.com
2244) hsw.cn
2245) adslgate.com
2246) free-press-release.com
2247) directv.com
2248) suite101.com
2249) fantasti.cc
2250) booksky.org
2251) 5pk.com
2252) globaltestmarket.com
2253) sakshi.com
2254) ajc.com
2255) china.cn
2256) enfemenino.com
2257) admob.com
2258) reliancebroadband.co.in
2259) explosm.net
2260) jobstreet.com
2261) 99inf.com
2262) forever21.com
2263) sparkpeople.com
2264) mint.com
2265) sznews.com
2266) elkhabar.com
2267) bravotube.net
2268) food.com
2269) fashionandyou.com
2270) nowec.com
2271) national.com.au
2272) computing.net
2273) kaboodle.com
2274) plotek.pl
2275) postimage.org
2276) tfile.ru
2277) vector.co.jp
2278) foreningssparbanken.se
2279) codingforums.com
2280) templatic.com
2281) kinghost.com
2282) alfemminile.com
2283) yinyuetai.com
2284) outlook.com
2285) eroprofile.com
2286) lg.com
2287) xinnet.com
2288) x-art.com
2289) forumotion.com
2290) clicksia.com
2291) livetv.ru
2292) milanuncios.com
2293) noupe.com
2294) glassdoor.com
2295) 2checkout.com
2296) cpz.to
2297) irr.ru
2298) mywot.com
2299) onsugar.com
2300) redfin.com
2301) drugs.com
2302) homeshop18.com
2303) 10010.com
2304) gfxtra.com
2305) chomikuj.pl
2306) royalbank.com
2307) longtailvideo.com
2308) computerbild.de
2309) support.wordpress.com
2310) islamweb.net
2311) joystiq.com
2312) yaplakal.com
2313) yasni.de
2314) dribbble.com
2315) torrentreactor.net
2316) hsbc.com
2317) hepsiburada.com
2318) qianlong.com
2319) cheetahmail.com
2320) letour.fr
2321) adserverpub.com
2322) google.com.sv
2323) eb80.com
2324) bankmandiri.co.id
2325) gettyimages.com
2326) bollywoodhungama.com
2327) freshbooks.com
2328) videosz.com
2329) pcgames.com.cn
2330) trend.az
2331) 51yes.com
2332) w3support.net
2333) nnm.ru
2334) mn66.com
2335) lufthansa.com
2336) sme.sk
2337) dw-world.de
2338) miibeian.gov.cn
2339) directtrack.com
2340) prothom-alo.com
2341) diythemes.com
2342) e1.ru
2343) wachovia.com
2344) smotri.com
2345) blog.me
2346) loveplanet.ru
2347) findicons.com
2348) abv.bg
2349) geocities.co.jp
2350) cheshi.com.cn
2351) ivi.ru
2352) gree.jp
2353) haolaba.com
2354) omgpm.com
2355) 114so.cn
2356) mycom.co.jp
2357) creativecommons.org
2358) all.biz
2359) nos.nl
2360) google.com.uy
2361) storesonlinepro.com
2362) dostor.org
2363) fnb.co.za
2364) howtogeek.com
2365) dn.se
2366) ec21.com
2367) meilishuo.com
2368) shorouknews.com
2369) mk.co.kr
2370) whatismyip.com
2371) bit.ly
2372) exploader.net
2373) opentable.com
2374) elong.com
2375) amazon.it
2376) lds.org
2377) h2porn.com
2378) eurosport.fr
2379) evergreenbusinesssystem.com
2380) wenxuecity.com
2381) chess.com
2382) lun.com
2383) orange.co.uk
2384) everydayhealth.com
2385) himado.in
2386) vagalume.com.br
2387) hsbc.com.hk
2388) cheapoair.com
2389) abovethematrix.com
2390) brighthub.com
2391) badjojo.com
2392) alwafd.org
2393) dualmarket.info
2394) terra.es
2395) ana.co.jp
2396) societegenerale.fr
2397) intelius.com
2398) mediabistro.com
2399) rincondelvago.com
2400) 3suisses.fr
2401) hichina.com
2402) emgoldex.com
2403) about.me
2404) dreammovies.com
2405) hotelscombined.com
2406) 1939.com
2407) seloger.com
2408) usenet.nl
2409) simplemachines.org
2410) computerbase.de
2411) cyberciti.biz
2412) gamersky.com
2413) mofos.com
2414) google.com.bo
2415) acer.com
2416) fazenda.gov.br
2417) liebiao.com
2418) wufoo.com
2419) tinychat.com
2420) ccbill.com
2421) 100fenlm.cn
2422) rising.cn
2423) 100ye.com
2424) olx.com.mx
2425) origo.hu
2426) chinatimes.com
2427) seek.com.au
2428) phoenix.edu
2429) afisha.ru
2430) pbs.org
2431) golem.de
2432) cuantocabron.com
2433) thedailyshow.com
2434) vmware.com
2435) ftd.de
2436) payoneer.com
2437) xungou.com
2438) hosteurope.de
2439) channel4.com
2440) today.com
2441) groupon.es
2442) bnet.com
2443) sblo.jp
2444) vrbo.com
2445) smi2.ru
2446) greatandhra.com
2447) opensubtitles.org
2448) asriran.com
2449) mydrivers.com
2450) motorola.com
2451) bytes.com
2452) clicrbs.com.br
2453) quibids.com
2454) amazon.ca
2455) speckyboy.com
2456) esuteru.com
2457) twipple.jp
2458) zamunda.net
2459) asklaila.com
2460) freenet.de
2461) arbeitsagentur.de
2462) crackberry.com
2463) adtech.info
2464) lyricsfreak.com
2465) footmercato.net
2466) 521g.org
2467) ingdirect.com
2468) wiley.com
2469) haber365.com
2470) bigcartel.com
2471) n4g.com
2472) yupoo.com
2473) index.hr
2474) push2check.com
2475) telecinco.es
2476) affiliatewindow.com
2477) pnc.com
2478) sport-fm.gr
2479) iconarchive.com
2480) rollingstone.com
2481) j-cast.com
2482) 77union.cn
2483) vbulletin.org
2484) arsenal.com
2485) oi.com.br
2486) folkd.com
2487) utro.ru
2488) mofosex.com
2489) spankwirecams.com
2490) tenki.jp
2491) chron.com
2492) tao123.com
2493) livingrichwithcoupons.com
2494) webmasterhome.cn
2495) nhaccuatui.com
2496) 20min.ch
2497) movshare.net
2498) enom.com
2499) opencart.com
2500) diandian.com
2501) sarenza.com
2502) balatarin.com
2503) lifenews.ru
2504) letao.com
2505) guru.com
2506) nzherald.co.nz
2507) azcentral.com
2508) ntvmsnbc.com
2509) lumosity.com
2510) nbc.com
2511) adtrackrs.com
2512) nastyvideotube.com
2513) itaringa.net
2514) peoplestring.com
2515) ebay.at
2516) userscripts.org
2517) qassimy.com
2518) hotklix.com
2519) jeddahbikers.com
2520) moheet.com
2521) stuff.co.nz
2522) hardware.fr
2523) videorewardcentral.com
2524) farmville.com
2525) sinowaypromo.com
2526) stockcharts.com
2527) magicmovies.com
2528) emol.com
2529) yle.fi
2530) csmonitor.com
2531) pgatour.com
2532) rapidlibrary.com
2533) filetram.com
2534) 293.net
2535) downloadha.com
2536) nhl.com
2537) ifolder.ru
2538) si.kz
2539) vedomosti.ru
2540) edreams.it
2541) xat.com
2542) telekom.de
2543) mforos.com
2544) zend.com
2545) bab.la
2546) punyu.com
2547) picofile.com
2548) oeeee.com
2549) bebo.com
2550) nj.com
2551) elheddaf.com
2552) bandcamp.com
2553) sport-express.ru
2554) sephora.com
2555) gmarket.co.kr
2556) allfacebook.com
2557) list-manage1.com
2558) ebayclassifieds.com
2559) 123greetings.com
2560) monografias.com
2561) icanhascheezburger.com
2562) dcinside.com
2563) megavod.fr
2564) nttdocomo.co.jp
2565) audible.com
2566) 4pda.ru
2567) rae.es
2568) 3dnews.ru
2569) olx.com.br
2570) eurogrand.com
2571) fotomac.com.tr
2572) vkrugudruzei.ru
2573) radio.com
2574) pornpros.com
2575) maultalk.com
2576) blic.rs
2577) torrentdownloads.net
2578) google.lv
2579) hankooki.com
2580) goo-net.com
2581) netshoes.com.br
2582) abclocal.go.com
2583) leonardo.it
2584) sap.com
2585) lachainemeteo.com
2586) link-assistant.com
2587) keywordspy.com
2588) samsclub.com
2589) facebook.net
2590) phpbb.com
2591) 500wan.com
2592) rai.it
2593) changyou.com
2594) ssrj.net
2595) foxbusiness.com
2596) waw.pl
2597) b2b.cn
2598) letsbuy.com
2599) abovetopsecret.com
2600) websitegrader.com
2601) v7n.com
2602) malaysiakini.com
2603) bigmir.net
2604) bangbros.com
2605) nvidia.com
2606) on.cc
2607) bancomercantil.com
2608) zakzak.co.jp
2609) criteo.com
2610) vworker.com
261
          StarDict (Linux) 3.0.1-1   
StarDict is a free translation tool for both individual word translation and translation of full sentences. This package is suitable for Ubuntu Linux.
          Dommo #400 21/mar/11   
En esta emisión de Dommo se comenta el crecimiento que ha tenido Netflix frente a Amazon y otros servicios de streaming de video. También se comenta el estudio que revela que las páginas Web se cargan más rápido en Android que en el iPhone. En entrevista Iván Zavala, vocero y coordinador de Iniciativas de Tecnologías de Información en FUMEC, nos platica sobre “Elevando la competitividad de México” que presentarán en alianza con Microsoft y Secretaría de Economía. Javier Metuk le ha instalado Ubuntu 10.10 a su netbook y ahora nos platica sobre la experiencia. Se recomienda el software Chandler para administrar un proyecto, actividades diarias o correo electrónico.
          Discontinuation of the Alpha 15 Multiplayer Lobby   
The Alpha 15 Osiris multiplayer lobby is now discontinued. Users of Alpha 15 are advised to upgrade to the latest version. The only distribution bundling Alpha 15 is Ubuntu 14.04 LTS, for which Alpha 16 is available in backports, and the latest version is available in our PPA. Alpha 15 was released December 2013 with … Continue reading
          Linux Tips: Desactivar el “touchpad” mentre s’escriu amb el teclat   
A l’arxiu /etc/X11/xorg.conf modificar la secció “InputDevice” afegint Option "SHMConfig" "on" sota l’apartat “Synaptics Touchpad” Section "InputDevice" Identifier "Synaptics Touchpad" ... Option "SHMConfig" "on" End Section Via: Ubuntu Forums – HowTo: Disable Synaptics Touchpad While Typing
          Give us mo' momo   
Katmandu Momo serves up crave-worthy Nepalese dumplings.

Since Saroja Shrestha and her husband, Kyler Nordeck, started the Katmandu Momo food truck in 2014, we've been following it around Little Rock and on social media in pursuit of our momo fix. What's a momo, you ask? It's a type of steamed South Asian dumpling, popular in Nepal, Tibet, Bhutan and parts of India, and Katmandu Momo's version is addictive.

Shrestha grew up in Katmandu, Nepal, and first came to the U.S. to attend Henderson State University in Arkadelphia. She stuck around Arkansas after college and met and married Nordeck. Shrestha enjoyed cooking, got acclaim from friends when she made momos and had a business administration degree, so she and Nordeck decided to open a food truck. After three years of success as a mobile eatery, they opened up shop in a corner stall of the River Market's Ottenheimer Hall earlier this year. The food truck remains in operation around town, but, man, oh man, are we glad to have a fixed location to get our fix of Nepalese deliciousness.

For those new to Katmandu Momo's cuisine, you're in luck: The options are few and all tasty. The steamed momos come filled with beef, chicken or veggies. They're roundish, creased together with a swirl on top (Shrestha assembles each momo). All the fillings are marinated in spices that may be somewhat familiar — cumin, coriander, turmeric — along with fresh garlic and ginger, but together, taste unlike anything we've tried before. They come with achar sauce, which is thin and tomato-based with hints of sesame oil and a slow heat. Dump the momos thoroughly in the achar sauce and lean all the way over your to-go container — the momos are juicy and, if you're not careful, you'll get splattered. We've had all varieties many times. They're all excellent, but we prefer the crunch of the veggie, which have a stewed-like quality, and apparently we're not alone. Nordeck says it's hard for Shrestha to make enough veggie to satisfy the demand.

The veggie momos are also vegan, as are all three of the sides. The long-grain jasmine fried rice was buttery and golden (we suspect there's a healthy seasoning of turmeric, saffron or both), with a wonderfully uneven toasted quality. It was like paella rice, but without the crunch. If the aloo dum, or spicy potato salad, was on a chain restaurant menu, it'd have a little hot pepper symbol next to it to warn the capsaicin-averse. It's spiced liberally with fennel, cumin, green onion and cilantro, and enough heat to make the crunchy, mild spring roll a nice go-between.

You can get large portions of each of the sides for $4, or get them as part of a combo. It's $8.99 for 10 momos, eight momos and a side or six momos and two sides.

Katmandu Momo
Ottenheimer Hall
400 President Clinton Ave.
351-4169
facebook.com/katmandumomo

Quick bite

Katmandu Momo's regular special is chicken chow mein, a smoky tangle of spaghetti mixed with blackened pieces of chicken and green pepper and covered with garam masala and other spices. It's a massive portion and, like everything else, delicious. You can get a veggie variety, too.

Hours

10:30 a.m. to 6 p.m. Monday through Saturday.

Other info

Credit cards accepted, no alcohol.



          Re: Mountain Goat: Ubuntu on Apple Hardware - Shoby   

His performance experience on his Mac with Ubuntu compared to OS X is nothing new.  It's not Unity that makes it seem fast.  It's Linux that makes it fast.  I've dual-booted Ubuntu from 9.04 to 11.04 along with Mac OS X Tiger and Leopard on my now-defunct iMac G5 (due to the bad caps issue) and the perrformance difference was quite noticeable. Ubuntu ran so much faster on that iMac G5 even with Compiz running, and the graphics chipset on it wasn't spectacular (Radeon 9600).  So his experience with a Linux distribution on Mac hardware comes as no surprise when you have been with OS X on it for some time.


          Re: 8-bit PC can run Ubuntu… takes about 6 hours to boot   

I remember doing a Debian "potato" installtion on an old Mac Quadra 650 that ran at 33 MHz.  Was fun when I didn't have life getting in the way to do stuff like this.  Still, the 68040 on that one is a 32-bit processor so this is still very geeky-cool. :-)


          Re: Jailbreak Only: Linux - Coming Soon To Your iPhone, iPad   

At least for the HP Touchpad, there is an Ubuntu port running on it that does use a soft keyboard, so I imagine it should also work on the iPad if something like Ubuntu were ported to it.  Although Wayland would likely become the display server for such a project, another project called Multi-pointer X (or MPX) seems to have been merged into the Xorg project, and that would allow for multitouch gestures on touchscreen devices like the iPad and the Touchpad.


          Re: LB - Episode 68 - Chad Likes Belts and Lord D Takes a Plop by Linux Basement   

I'm sure the Banshee developers would beg to differ with your statement. I personally find nothing wrong with Ubuntu. It's great that there's a distro out there that is aiming for the end user, and the fact that distros based on Ubuntu like Linux Mint, Trisquel, and others exist is proof of that. The issue here is Canonical and its decisions, or more specifically the decisions it tries to impose on other projects. As I said, they are not Apple and will never be Apple, so they need to stop acting like they're Apple. Apple is not a friend of FLOSS, so Canonical should not model itself after Apple.


          Backuppc installation Ubuntu 9.04   
Installing backuppc from the repository doesn’t result in accessing it over browser by typing http://localhost/backuppc Instead I had go a step further by appending the following snippet from /etc/backuppc/apache.conf to /etc/apache2/apache2.conf Alias /backuppc /usr/share/backuppc/cgi-bin/ AllowOverride None # Uncomment the line below to ensure that nobody can sniff importanti # info from network traffic during editing […]
          python jaunty django virtualization   
the first problem that i ran into after installing Ubuntu 9.04 is that I could not start my django application and it threw error. i had followed the django installation instructions available at its website and had made a symbolic link of my svn check out of django source to /usr/lib/python2.5/site-packages later I realized that […]
          another amachu   
amachu has been unique in google search’s first page results until yesterday when i found this blogger getting into top ten of search results using the very same nick amachu http://www.blogcatalog.com/user/amachu 🙂 Welcome to amachu nick name club 🙂 Hope she becomes a ubuntu member and fight with me over the nick name 🙂
          FossConf 2009 & Ubuntu Tamil Team   
We had a Stall for people who visited FossConf-2009 held at Thiagaraya Engineering College, Madurai from Feb 27 to Mar 01. Padmanathan of our Team was there fulltime for all three days explaining all that we do. I was there for the second day. And we shared Fedora DVDs too for the participants. Thanks to […]
          When Vellore woke up for Ubuntu   
Dec 21, 2008. The IT Association of  Vellore(1), the fort City of Tamilnadu, had organized a day of ‘Ubuntu Dawn’ for its members and general public on at Hotel Aavana Inn, Vellore. More than forty people from Vellore, Aarani and other places of Vellore district got benefitted as a result of this event. Different modes […]
          Udhagmandalam & Ubuntu   
Udagamandalam, Nov 28. Computer World of Udagamandalam (a.k.a Ooty), hosted Intrepid Ibex release event at Lawly Institute. Wide range of people ranging form School Teachers, Students, Computer Vendors, Government Officials tokk part in the event. Variety of topics starting from different modes of Installing Ubuntu, Package Management, Administration, Tamil features were discussed and demonstrated. Special […]
          Port trunking Linux   

Bonding, también llamado port trunking o agregación de enlaces que consiste en combinar varias interfaces de red (NIC) a un solo enlace, ya sea proporcionando alta disponibilidad, balanceo de carga, máximo rendimiento, o una combinación de éstos.

Lo primero es instalar ifenslave, en Ubuntu 12.10:

sudo apt-get install ifenslave-2.6
echo "bonding" >> /etc/modules

Con la red parada, editamos la configuración de las interfaces:

sudo vi /etc/network/interfaces

Pego la configuración típica, puede que el nombre de las interfaces de red sea diferente:

#eth0 es esclavo de bond0
auto eth0
iface eth0 inet manual
bond-master bond0

#eth1 es esclavo de bond0
auto eth1
iface eth1 inet manual
bond-master bond0

# bond0 is the bonded NIC and can be used like any other normal NIC.
# bond0 is configured using static network information.
auto bond0
iface bond0 inet static
address 192.168.1.10
gateway 192.168.1.1
netmask 255.255.255.0
# bond0 uses standard IEEE 802.3ad LACP bonding protocol 
# bond-mode 802.3ad = 4
bond-mode 4
#bond-mode 0 Los switches viejos soportan mejor este modo.
#bond-mode 2 balance-xor (este modo es casi igual al 802.3ad pero lo soportan mas switches.
bond-miimon 100
bond-lacp-rate 1
bond-slaves none

Podemos ver el estado del enlace con el siguiente comando, es necesario decirle al switch donde lo vamos a conectar que esos puertos son de tipo LACP.

cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: fast
Min links: 0
Aggregator selection policy (ad_select): stable
Active Aggregator Info:
        Aggregator ID: 1
        Number of ports: 2
        Actor Key: 17
        Partner Key: 4
        Partner Mac Address: fc:75:16:5d:bb:a9

Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:25:22:d3:e4:74
Aggregator ID: 1
Slave queue ID: 0

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:25:22:d3:e4:74
Aggregator ID: 1
Slave queue ID: 0

Podéis ver el manual completo en la siguiente URL: https://help.ubuntu.com/community/UbuntuBonding


          III FESTIVAL INTERNACIONAL DE SOFTWARE LIBRE - GNU/LINUX . FESOLI 2009   

FESOLI 2009

Será el 14 de noviembre y promete superar ediciones de 2007 y 2008
UNIVERSIDAD GARCILASO REALIZARÁ III FESTIVAL INTERNACIONAL DE SOFTWARE LIBRE–GNU/LINUX. FESOLI 2009.

La Facultad de Ingeniería de Sistemas, Cómputo y Telecomunicaciones de la
Universidad Inca Garcilaso de la Vega, Lima - Perú, realizará la tercera edición
del Festival Internacional de Software Libre-GNU/Linux, denominado FESOLI
2009, con la participación de renombrados expositores nacionales y
extranjeros, quienes estarán concentrados en las instalaciones de la Facultad
el próximo 14 de noviembre.

El certamen académico lleva como título “Software Libre en la Empresa y el
Estado en el marco de la Crisis Mundial. Casos de Éxito”, el mismo que tiene
como objetivo presentar diversas experiencias en investigación, en el desarrollo
de proyectos, casos de éxito y soluciones, basados en Software Libre. Dichas
experiencias están orientadas a satisfacer las necesidades de la sociedad, la
empresa y el Estado.

FESOLI 2009 es un evento académico que está dirigido a la comunidad
académica y científica, líderes responsables en Tecnologías de la Información
(TI), profesionales del área de la computación, sistemas, informática y
telecomunicaciones, así como integrantes de las diversas comunidades de
software libre distribuidas en nuestro país.

Entre los temas que se abordarán en el FESOLI 2009 se encuentran: Derechos
de autor, patentes y licencias en la cultura del Software Libre; Herramientas
libres para el desarrollo de aplicaciones en la industria del software; Modelo de
migración al Software Libre; Plan estratégico del Software Libre para
comunidades en vías de desarrollo; Uso del Software Libre como alternativa
frente a la exclusión y la brecha digital; entre otros temas de singular
relevancia.

Durante el evento habrá Ponencias y Conferencias Magistrales a cargo de
profesionales nacionales e internacionales especialistas en el Software Libre.
Asimismo, se desarrollará una Mesa Redonda, donde se debatirá la adopción
del software libre en el Estado y Empresa en el marco de la Crisis Mundial.
Además, en el Stand participarán diversas empresas auspiciadoras e invitados
en el cual, se demostrarán los casos de éxito y distintas soluciones bajo la
plataforma del Software Libre. Mientras que los Talleres tratarán sobre las
soluciones en diversas implementaciones de Software Libre y en donde el
usuario podrá conocer las ventajas de su uso.

Los expositores internacionales que han asegurado su participación en el
FESOLI 2009 son: el Presidente y Director Ejecutivo de Linux Internacional,
Jon “maddog” Hall (EEUU); el Desarrollador y responsable de las series
estables del núcleo 2.4, Marcelo Tosatti (Brasil); el Director Gerente de Dokeos
Latinoamérica en Perú: el Cofundador de Hispalinux y ditor de Barrapunto.com,
Juan José Amor y experto en PHP con más de cinco años de experiencia,
Yannick Warnier (Bélgica).

En tanto, los ponentes nacionales que han confirmado su asistencia al FESOLI
2009 se encuentran: el Gerente General de Antartec S.A.C, Alfredo Zorrilla; el
Director del Centro Open Source, Alfonso de la Guarda; Proyecto Xendra ERP;
Francisco Morosini, representante de la Comunidad UBUNTU; Nicolás
Valcárcel; y el especialista en temas de Uso y Aplicaciones de TI para el
Estado Peruano ONGEI, César Vílchez; entre otros destacados especialistas.

Cabe mencionar que los miembros de la Comunidad de Software Libre
Garcilasina (COSOLIG), integrado por estudiantes de la Facultad de Ingeniería
de Sistemas, Cómputo y Telecomunicaciones de la UIGV, y docentes de esta
casa de estudio, están a cargo de la organización de este magno evento.

Fuente: Oficina de Marketing e Investigación de Mercado de la UIGV


          How to compare dates in Java   
1. Date.compareTo()
A classic method to compare two dates in Java.
– Return value is 0 if both dates are equal.
– Return value is greater than 0 , if Date is after the date argument.
– Return value is less than 0, if Date is before the date argument.


package com.mkyong.date;

import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

public class App
{
public static void main( String[] args )
{
try{

SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd");
Date date1 = sdf.parse("2009-12-31");
Date date2 = sdf.parse("2010-01-31");

System.out.println(sdf.format(date1));
System.out.println(sdf.format(date2));

if(date1.compareTo(date2)>0){
System.out.println("Date1 is after Date2");
}else if(date1.compareTo(date2)<0){
System.out.println("Date1 is before Date2");
}else if(date1.compareTo(date2)==0){
System.out.println("Date1 is equal to Date2");
}else{
System.out.println("How to get here?");
}

}catch(ParseException ex){
ex.printStackTrace();
}
}
}


2. Date.before(), Date.after() and Date.equals()
A more user friendly method to compare two dates. Java developers seldom use this method, not really sure what may causing it, may be they just didn't aware of it?

package com.mkyong.date;

import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

public class App
{
public static void main( String[] args )
{
try{

SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd");
Date date1 = sdf.parse("2009-12-31");
Date date2 = sdf.parse("2010-01-31");

System.out.println(sdf.format(date1));
System.out.println(sdf.format(date2));

if(date1.after(date2)){
System.out.println("Date1 is after Date2");
}

if(date1.before(date2)){
System.out.println("Date1 is before Date2");
}

if(date1.equals(date2)){
System.out.println("Date1 is equal Date2");
}

}catch(ParseException ex){
ex.printStackTrace();
}
}
}


3. Calender.before(), Calender.after() and Calender.equals()
Most common way to compare two dates with Calendar object, and believe it's most widely used method.
package com.mkyong.date;

import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.Date;

public class App
{
public static void main( String[] args )
{
try{

SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd");
Date date1 = sdf.parse("2009-12-31");
Date date2 = sdf.parse("2010-01-31");

System.out.println(sdf.format(date1));
System.out.println(sdf.format(date2));

Calendar cal1 = Calendar.getInstance();
Calendar cal2 = Calendar.getInstance();
cal1.setTime(date1);
cal2.setTime(date2);

if(cal1.after(cal2)){
System.out.println("Date1 is after Date2");
}

if(cal1.before(cal2)){
System.out.println("Date1 is before Date2");
}

if(cal1.equals(cal2)){
System.out.println("Date1 is equal Date2");
}

}catch(ParseException ex){
ex.printStackTrace();
}
}
}



nguồn : http://www.mkyong.com/java/how-to-compare-dates-in-java/
          DNIe en Kubuntu   

DNIeHoy he tenido la suerte de poder probar el DNIe en GNU/Linux exactamente en Kubuntu.

Las claves son de 2048 bits, hay dos en la tarjeta (una para certificados de autenticación y el de firma digital) ambas claves tienen su clave privada y pública y la privada nunca abandona la tarjeta, el pin es de 8 a 16 caracteres.

En primer lugar hay que instalar el driver del dispositivo lector de la tarjeta, en mi caso que es una distribución basada en Debian sería algo así:

$ sudo apt-get install pcscd pcsc-tools libpcsclite1 libccid

Una vez instaladas las dependencias necesarias para el lector, instalamos los paquetes que recomiendan en la web del DNIe en mi caso bajo los paquetes para Ubuntu Dapper, (tengo instalada feisty). Descomprimo el fichero comprimido y a continuación instalo los paquetes:

$ sudo dpkg -i libopensc2_0.11.1-svn1_i386.deb \
libopensc2-dev_0.11.1-svn1_i386.deb \
mozilla-opensc_0.11.1-svn1_i386.deb \
opensc_0.11.1-svn1_i386.deb \
opensc-dnie_1.2.1-3_i386.deb

Si por alguna razón nos falla podemos probar a ejecutar:

$ sudo apt-get -f install

Ahora nos queda descargarnos de Internet el fichero de tarjetas (este fichero contiene las descripciones de cada tipo de tarjeta y sus códigos asociados):

$ wget http://ludovic.rousseau.free.fr/softwares/pcsc-tools/smartcard_list.txt \
--output-document=$HOME/.smartcard_list.txt

Ahora ya tenemos instalado todo lo necesario para que funcione el DNIe en nuestro pc con GNU/Linux, solo nos queda probar que nos lee la tarjeta.

Arrancamos el servicio para leer la tarjeta y arrancamos el ejecutable para ver el estado:

$ sudo /etc/init.d/pcscd start
$ pcsc_scan

Si todo ha funcionado correctamente debería salir algo así con colores:

PC/SC device scanner
V 1.4.8 (c) 2001-2006, Ludovic Rousseau <ludovic.rousseau[at]free.fr>
Compiled with PC/SC lite version: 1.3.2
Scanning present readers
0: C3PO LTC31 (11061005) 00 00

Mon Oct 22 18:31:46 2007
 Reader 0: C3PO LTC31 (11061005) 00 00
  Card state: Card removed,

Inserto la tarjeta:

Mon Oct 22 18:33:11 2007
 Reader 0: C3PO LTC31 (11061005) 00 00
  Card state: Card inserted,
  ATR: 3B 7F 38 00 00 00 6A 44 4E 49 65 20 02 4C 34 01 13 03 90 00

ATR: 3B 7F 38 00 00 00 6A 44 4E 49 65 20 02 4C 34 01 13 03 90 00
+ TS = 3B --> Direct Convention
+ T0 = 7F, Y(1): 0111, K: 15 (historical bytes)
  TA(1) = 38 --> Fi=744, Di=12, 62 cycles/ETU (57600 bytes/s at 3.57 MHz)
  TB(1) = 00 --> VPP is not electrically connected
  TC(1) = 00 --> Extra guard time: 0
+ Historical bytes: 00 6A 44 4E 49 65 20 02 4C 34 01 13 03 90 00
  Category indicator byte: 00 (compact TLV data object)
    Tag: 6, len: A (pre-issuing data)
      Data: 44 4E 49 65 20 02 4C 34 01 13
    Mandatory status indicator (3 last bytes)
      LCS (life card cycle): 03 (Initialisation state)
      SW: 9000 (Normal processing.)

Possibly identified card (using /home/luipeme/.smartcard_list.txt):
3B 7F 38 00 00 00 6A 44 4E 49 65 20 02 4C 34 01 13 03 90 00
        DNI electronico (Spanish electronic ID card)
        http://www.dnielectronico.es

Ahora podemos probar a cambiar el PIN de la tarjeta, tenemos que acceder al servicio telemático de cambio de PIN, descomprimimos el fichero y ejecutamos:

chmod +x Cambio_de_PIN.sh
./Cambio_de_PIN.sh

DNIe captura de pantalla
Nos preguntará el pin antiguo y el nuevo, es bastante incomodo ya que tienes que introducir el PIN con unas pantallas con letras, números y símbolos.


          Quicktips 1: Windows 7 Libraries; New website   

I’m working on several large posts right now. So in the interim, I’ve decided to do shorter posts that contain something I find very helpful. This is the first.

I’ve been using Windows 7 since April 2010. It’s the first OS I’ve ever worked with that I actually enjoy. I’ve used many over the years (KERNAL; PC DOS; MS-DOS 3.x+; Windows 3.0, 3.11, 95, 98, 98 SE, Me, NT 3.51, NT 4, 2000, XP, Vista, 7; various GNU/Linux distros starting with Debian 1.2 – most recently Ubuntu 10.04; ProDOS, Mac OS 9.X, Mac OS X (through 10.4); SunOS, Solaris; AIX, z/OS; OpenVMS). Some were frustrating. Some tolerable. Some were “nice except for…”. OS X actually started out as seemingly “nice” until every single release contained a breaking change to some major API and they then decided to flip-off everyone who had bought a Mac as little as two years earlier with the release of Snow Leopard without PPC support. Windows 7 is the first one that’s just “nice” without any qualifiers. There are so many little features that add up to make it nice. Today’s Quicktip is one of them.

Quicktip 1: Create a Library for your Code

One thing I particularly like about Windows 7 is the Libraries feature in Explorer. Specifically the fact that you can create custom ones. I used to spend a lot of time opening new Explorer windows and navigating my various Visual Studio projects folders. Custom libraries allowed me to simplify that whole process. I now simply go to my “Code” library and there it all is.

Adding a new library is easy. Open an Explorer window. If you aren’t in your Libraries when it opens, navigate to Libraries. Click the “New library” button. Give it a name. Then right click on the new library you created and go to “Properties”. Click the “Include a folder…” button. Choose the folder you want and press “Include folder”. Voilà! If you wish to add more, simply click “Include a folder…” again and repeat. It’s true that this is just a small time saver. But it’s one of those things that just adds a really nice touch.

------------------------

In a separate note, just before Christmas I finally finished and published my new website: http://www.bobtacoindustries.com/ . I waited to post here about it until I found time to incorporate a few things I hadn’t had the time to do when I pushed it out for its “soft open”. Most of them are now done and so my site is now formally open. I have no plans or intentions of moving my blog ( http://blog.bobtacoindustries.com/ points here). I quite like it here, both in terms of the interface and also in terms of the concept (and realization thereof) of pooling geek bloggers to create a pool of knowledge and helpful tips, tricks, techniques, and advice.

I created it simply because I felt that it was time to have a website as I venture further into my return to the land of software development. The “For Devs” section should hopefully be useful to developers, particularly the links section. It’s my curated list of sites that I regularly visit to solve problems, to help answer questions on Twitter and the AppHub forums, and to learn new things. I’ll be adding links to it periodically and will be including topic areas as I become acquainted with them enough to form a proper list. WPF will likely be the first topic area added.

If there are any links you think I should add to the existing topics, let me know! I warn in advance that I’m less inclined to add blogs; there are simply too many good blogs and I do not want to have hundreds per topic area. So blogs are limited primarily, though not exclusively, to acknowledged experts in the subject area who generally blog regularly about it and who usually are part of the team that develops the product or technology in question. I’m much more amenable to including individual blogs posts in the techniques subcategory in the appropriate topic area. Ultimately, it’s a collection of things I find interesting and helpful. So please no hard feelings if I don’t add a link you think is awesome. I may well think it’s awesome too, but conclude that it doesn’t fit with my goals for the dev links area.


          How to disable the guidance-power-manager from autostarting in kubuntu   
If you’re one of those using kde 4.2 on kubuntu, and getting annoyed at guidance-power-manager starting up all the time while you instead want to use the more integrated powerdevil, here’s the clean solution. The Autostart spec says that adding Hidden=true to a desktop file says that it should be completely ignored. This is, by […]
          A binary package-based system with own patches?   
I’m increasingly getting an urge to expanding my hackings into other parts of kde, as to actually fix the small things that annoy me myself. There is only one problem though; I am very happy with the updated packages the kubuntu team supplies, and especially with my small laptop being my only available system, keeping […]
          Comment on VPN in Ubuntu 9.04 (Jaunty) by Phil   
For future readers: Ubuntu 14.10 here - I needed to further select "128-bit (most secure)" from the "Security" dropdown. The default of "All Available" did not work for me.
          Comment on NumLock broken in Ubuntu by John   
Thanks for the tip. It worked for me.
          Comment on NumLock broken in Ubuntu by Enrico   
Thanks a lot! I had the same problem!
          Comment on NumLock broken in Ubuntu by Connor   
Thank you!
          How to install Ansible on Ubuntu !   
sudo apt-get install software-properties-common
sudo apt-add-repository ppa:ansible/ansible
sudo apt-get update
sudo apt-get install ansible

          Using the Sun to Power Your RV   
Jumping in your RV and leaving the rat race for the weekend is an American tradition. Did you know you can provide power to your RV with the sun while getting away from it all?

The Sun is Everywhere!

One of the biggest misconceptions regarding solar power is that it is limited to large panel systems on roofs. Au contraire! With new nanotechnology, solar power systems will soon be applied with the paint you use to improve your home. That’s still two or three years away, so what about now?

If you enjoy taking the RV out for an excursion, you can use solar power to provide your electrical needs. Whether you are going camping or to a NASCAR race, it is an exceedingly simple process.

Unlike homes, RVs run on direct current electricity. This makes them perfect for solar electricity since solar systems produce direct current electricity instead of alternating current. Put another way, there is no need for bulky converters to flip the electricity from direct to alternating. Instead, you can use the sun to power up your batteries directly.

Portable solar systems consist of pop-up solar modules with four or five panels. Essentially, they look like small ladders with solar panels instead of steps. You just pop them up on the roof of the RV or in an area where the sun hits them. The systems tie directly into your batteries and power them up during the day. Super easy and super clean.

The real advantage to solar RV systems has to do with noise. The traditional method for recharging your RV batteries is to turn on a generator and generators can be very loud. Even the quietest generator makes enough noise to make you feel like you live next to a construction site. Solar systems make no noise at all. There are no moving parts, just the sun beating down on the panels. You’ll never know they are even there.

If RVing is your thing, portable solar modules are worth taking a look at. With high fuel prices, you need to save a buck wherever you can.
          Comment on Domenii…part2 by WladyX   
My gentoo days are over, long live Arch&Ubuntu :)
          SPEED UP FILE COPYING   

USE TERACOPY TO SPEED UP FILE COPYING


TeraCopy from Code Sector is a free file-copying utility that offers more speed and security than Windows. It's a compact tool that can quickly copy or move single files or batches of files to any directory you select, but it does much more, such as automatically calculating CRC checksum values to speed up the validation process. It also skips bad files during the copying process, displaying them at the end of transfers so you can see just which ones need replacing or other attention.
TeraCopy's user interface is a pair of efficient dialogs, one an icon-based control panel that you use to add, copy, move, test, and delete files, and a second interface that pops up to do the work. After we installed TeraCopy, it opened with this second interface in minimized mode, a tiny dialog with twin file directory fields--one for source files, the other for the target folder--that double as progress bars for file transfers. Clicking More expands the interface to a multifile view for batch operations and accesses the Clean Up, Verify, and Delete controls as well as a file menu button that includes Options; you can also access this interface from the Start Menu. A drop-down menu lists recent operations with time stamps for quick retrieval. Selecting TeraCopy on a file's properties menu calls up a different, icon-based navigation and control panel. We opened this interface and used the browsing tool to add a file to copy and create a destination folder, and then clicked Copy. The operation was successful but concluded so quickly that we had to open the target folder and check the file's properties to verify that anything happened at all. We also tried the Test feature in this view, which verified an ubuntu ISO disk image in about 2 seconds. You can even associate TeraCopy with .sfv and .md5 files in its options dialog or during installation.
TeraCopy is a nifty piece of freeware that improves the copy/move function in Windows and adds useful extras like checksum calculation and permanent delete. We tried it in both Windows 7 and XP, and recommend it for all Windows users.

Using TeraCopy
TeraCopy uses dynamically adjusted buffers to reduce seek time as well as asynchronous copy to speed up the transfer time.
Copy With Vista  
For this example I am transferring one of my mp3 collections to my local C: drive from my external.  The size of this folder is 56.1 GB and would take quite a while copying over by using the drag and drop method.
There are a couple ways to go about using TeraCopy.  The first is to open the TeraCopy user interface and drag the files or folder you want to copy into TeraCopy. From there you can copy them over by clicking the Copy or Move button and selecting a location.
Another way to use TeraCopy is right click on the folder you want to transfer then select TeraCopy from the menu.  This will open up the TeraCopy user interface and copy all files into the application.
You will see progress indicators in the user interface while the files are copying, and you can pause, resume, or cancel the process.  In the instance of an error TeraCopy will only skip the file after trying several times to correct the issue. If nothing can be done, TeraCopy will only skip the file and not cancel the entire process.  This is a huge advantage compared to relying on a Windows transfer.
 
Opening up the Options and Preferences you can integrate TeraCopy into the Windows and use it as the default copy handler.
Another cool thing to mention is that teracopy portable is available and is compatible with portable apps.  TeraCopy completed the 56.1 GB file transfer in under an hour.  While the transfer was being executed I noticed no lag time on any of the other things I was doing on my computer. It was as though it was not running at all.


DOWNLOAD TERACOPY

          Update-Benachrichtigungen mit Xfce & Debian   

Im Gegensatz zu anderen Desktopumgebungen hat Xfce keine eigene Updateroutine und benachrichtigt somit auch nicht über bereitstehende Aktualisierungen. Bei Xubuntu behilft man sich über das „Indicator Applet“ in der Werkzeugleiste. Weiterlesen »

          Ubuntu OpenStack Dev Summary – 12th June 2017   
Welcome to the second Ubuntu OpenStack development summary! This summary is intended to be a regular communication of activities and plans happening in and around Ubuntu OpenStack, covering but not limited to the distribution and deployment of OpenStack on Ubuntu. If there is something that you would like to see covered in future summaries, or […]
          Ubuntu OpenStack Dev Summary – 22nd May 2017   
Welcome to the first ever Ubuntu OpenStack development summary! This summary is intended to be a regular communication of activities and plans happening in and around Ubuntu OpenStack, covering but not limited to the distribution and deployment of OpenStack on Ubuntu. If there is something that you would like to see covered in future summaries, […]
          OpenStack Newton B3 for Ubuntu   
The Ubuntu OpenStack team is pleased to announce the general availability of OpenStack Newton B3 milestone in Ubuntu 16.10 and for Ubuntu 16.04 LTS via the Ubuntu Cloud Archive. Ubuntu 16.04 LTS You can enable the Ubuntu Cloud Archive pocket for OpenStack Newton on Ubuntu 16.04 installations by running the following commands: sudo add-apt-repository cloud-archive:newton […]
          OpenStack 2015.1.0 for Ubuntu 14.04 LTS and Ubuntu 15.04   
The Ubuntu OpenStack team is pleased to announce the general availability of OpenStack 2015.1.0 (Kilo) release in Ubuntu 15.04 and for Ubuntu 14.04 LTS via the Ubuntu Cloud Archive. Ubuntu 14.04 LTS You can enable the Ubuntu Cloud Archive for OpenStack Kilo on Ubuntu 14.04 installations by running the following commands:  sudo add-apt-repository cloud-archive:kilo sudo […]
          Neutron, ZeroMQ and Git – Ubuntu OpenStack 15.04 Charm release!   
Alongside the Ubuntu 15.04 release on the 23rd April, the Ubuntu OpenStack Engineering team delivered the latest release of the OpenStack charms for deploying and managing OpenStack on Ubuntu using Juju. Here are some selected highlights from this most recent charm release. OpenStack Kilo support As always, we’ve enabled charm support for OpenStack Kilo alongside […]
          OpenStack Kilo RC1 for Ubuntu 14.04 LTS and Ubuntu 15.04   
The Ubuntu OpenStack Engineering team is pleased to announce the general availability of the first release candidate of the OpenStack Kilo release in Ubuntu 15.04 development and for Ubuntu 14.04 LTS via the Ubuntu Cloud Archive. Ubuntu 14.04 LTS You can enable the Ubuntu Cloud Archive for OpenStack Kilo on Ubuntu 14.04 installations by running […]
          Script to Install Squid Proxy Server on Ubuntu on EC2 or Cloud VPS by daxplicitazn   
I am currently looking for a Freelance programmer to create a script for that can set up a Squid3 Proxy Server on multiple Ubuntu 14.04 servers at one time. It must automatically log in to a list of servers... (Budget: $30 - $250 USD, Jobs: Amazon Web Services, Debian, Squid Cache, Ubuntu, VPS)
          Comment on How To Add Templates In Ubuntu Context Menu [Tip] by Tristán   
Very good tip, but, is there a way to do it for all users at once? I mean, not to do it inside the home folder?By the way, in Spanish the word must be "Plantillas" instead of Templates. I guess that in other languages it will be different as well.
          UNetbootin 585   
UNetbootin allows you to create bootable Live USB drives for Ubuntu, Fedora, and other Linux distributions without burning a CD.
          Comment on Compose key with XFCE by Pepo   
I'm using xubuntu 12.10. There's no xorg.conf, but the /etc/default/keyboard file has the XKBOPTIONS variable that you can set as you did with XkbOptions. Hope it helps.
          Comment on Add GNOME applets to the Xfce panel by bapoumba   
May be here : http://ubuntuforums.org/showthread.php?t=825896&p=8497945#post8497945 ?
          Comment on How to check an Ubuntu .iso file by nfa firearms   
For latest information you have to pay a visit internet and on world-wide-web I found this web site as a most excellent site for latest updates.
          Comment on SNMP Cannot Find Module by Jamshid   
Anyone else having trouble installing that -- note that package is only available if you add this to /etc/apt/sources.list (don't forget "apt-get update"). ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team, and may not be under a free licence. Please satisfy yourself as to ## your rights to use the software. Also, please note that software in ## multiverse WILL NOT receive any review or updates from the Ubuntu ## security team. deb http://us.archive.ubuntu.com/ubuntu/ precise multiverse" >> /etc/apt/sources.list deb-src http://us.archive.ubuntu.com/ubuntu/ precise multiverse" >> /etc/apt/sources.list deb http://us.archive.ubuntu.com/ubuntu/ precise-updates multiverse" >> /etc/apt/sources.list deb-src http://us.archive.ubuntu.com/ubuntu/ precise-updates multiverse" >> /etc/apt/sources.list
          Comment on SNMP Cannot Find Module by snapshot   
Awesome! Works also with Ubuntu 13.10
          Un peu de pub…   
Z’avez vu la dernière pub? Voici un petit clip qui a été présenté plus tôt cette semaine au sommet des développeurs Ubuntu. Get your own valid XHTML YouTube embed code Lien vers le vidéo (si vous visionner à partir du planet-ubuntu francophone). Également à lire.....Edubuntu en weblive (5)Installation de LibreOffice dans Ubuntu 10.04 LTS et […]
          Jeux payants dans la logithèque Ubuntu   
Si vous faites une petite recherche dans la logithèque Ubuntu, vous remarquerez qu’une application propriétaire ET payante, un jeu dans notre cas, y a été ajoutée. Je ne suis pas un « libre à tout prix », mais j’ignore encore si je dois m’en réjouir ou pleurer…. Chose certaine, c’est à suivre avec intérêt. Et vous? p.s. […]
          Link: Guide to installing Qbuntu (Ubuntu 16.04) TemplateVM in Qubes 3.2   

Idiot's guide to installing Qbuntu (Ubuntu 16.04) TemplateVM in Qubes 3.2 (w/optional desktop) : Qubes

Pro Memoria. Untested by me at this point in time.


          Initial ideas on how to get sshd into a Ubuntu server install   

To add packages for the installation, add a %package section to the ks.cfg kickstart file, append to the end of ks.cfg file something like this.

%packages
@ ubuntu-server
openssh-server
ftp
build-essential

The above is from:

system installation - How do I create a completely unattended install of Ubuntu? - Ask Ubuntu

A skeleton kickstart file looks like this:

#
#Generic Kickstart template for Ubuntu
#Platform: x86 and x86-64
#

#System language
lang en_US

#Language modules to install
langsupport en_US

#System keyboard
keyboard us

#System mouse
mouse

#System timezone
timezone America/New_York

#Root password
rootpw --disabled

#Initial user (user with sudo capabilities) 
user ubuntu --fullname "Ubuntu User" --password root4me2

#Reboot after installation
reboot

#Use text mode install
text

#Install OS instead of upgrade
install

#Installation media
cdrom
#nfs --server=server.com --dir=/path/to/ubuntu/

#System bootloader configuration
bootloader --location=mbr 

#Clear the Master Boot Record
zerombr yes

#Partition clearing information
clearpart --all --initlabel 

#Basic disk partition
part / --fstype ext4 --size 1 --grow --asprimary 
part swap --size 1024 
part /boot --fstype ext4 --size 256 --asprimary 

#Advanced partition
#part /boot --fstype=ext4 --size=500 --asprimary
#part pv.aQcByA-UM0N-siuB-Y96L-rmd3-n6vz-NMo8Vr --grow --size=1
#volgroup vg_mygroup --pesize=4096 pv.aQcByA-UM0N-siuB-Y96L-rmd3-n6vz-NMo8Vr
#logvol / --fstype=ext4 --name=lv_root --vgname=vg_mygroup --grow --size=10240 --maxsize=20480
#logvol swap --name=lv_swap --vgname=vg_mygroup --grow --size=1024 --maxsize=8192

#System authorization infomation
auth  --useshadow  --enablemd5 

#Network information
network --bootproto=dhcp --device=eth0

#Firewall configuration
firewall --disabled --trust=eth0 --ssh 

#Do not configure the X Window System
skipx

Taken from here: 4.6. Automatic Installation


          Official UDS-O group photo and personal photo set   
After 8544 miles traveled, 16 days on the road, 40 GB of RAW photos, and two days of post-travel coma, I’ve posted the official group photos and my personal photo set from Ubuntu Developer Summit Oneiric Ocelot (UDS-O) which took place at the opulent Corinthia Hotel Budapest (formerly Grand Hotel Royal), Budapest, Hungary, EU (9th […]
          Gallery: UDS Oneiric Logistics (April 2011)   
Ubuntu Developer Summit Oneiric Ocelot (UDS-O) Logistics at Canonical Group Limited, London, England, UK – 11th – 15th April 2011 [smugmug url=”http://photos.pixoulphotography.com/hack/feed.mg?Type=gallery&Data=16650741_kFjH28&format=rss200″ imagecount=”400″ start=”1″ num=”400″ thumbsize=”Th” link=”smugmug” captions=”false” sort=”false” window=”true” smugmug=”true” size=”L”]
          Official UDS-N group photo and personal photo set   
I’ve posted the official group photo and my personal photo set from Ubuntu Developer Summit Natty Narwhal (UDS-N) which took place at The Caribe Royal, Orlando, Florida, USA – 25th – 29th October 2010. Overall it was quite a productive trip and, in addition to working event support, running video cameras, photographing the event, and […]
          Gallery: UDS Maverick (May 2010)   
Ubuntu Developer Summit Maverick Meerkat (UDS-M) at Dolce La Hulpe Hotel and Resort, Brussels, Belgium, EU – 10th – 14th May 2010 [cc by-sa 2010 Sean Sosik-Hamor] [smugmug url=”http://photos.pixoulphotography.com/hack/feed.mg?Type=gallery&Data=14501193_kBkYg&format=rss200″ imagecount=”400″ start=”1″ num=”400″ thumbsize=”Th” link=”smugmug” captions=”false” sort=”false” window=”true” smugmug=”true” size=”L”]
          Gallery: UDS Lucid (November 2009)   
Ubuntu Developer Summit Lucid Lynx (UDS-L) at The Renaissance Dallas Hotel, Dallas, Texas, USA – 16th – 20th November 2009 [cc by-sa 2009 Sean Sosik-Hamor] [smugmug url=”http://photos.pixoulphotography.com/hack/feed.mg?Type=gallery&Data=14643169_zmAYA&format=rss200″ imagecount=”250″ start=”1″ num=”250″ thumbsize=”Th” link=”smugmug” captions=”false” sort=”false” window=”true” smugmug=”true” size=”L”]
          LAMP: la tecnología detrás de un proyecto web   

En los últimos días paso una gran cantidad de horas en dos proyectos enfocados a las TICs que me tienen muy entretenido, uno en el plano profesional y el otro en el plano educativo. El primero de ellos es bastante complejo y con un perfil realmente de alto nivel, donde está involucrado un equipo de profesionales que va desde diseñadores, programadores, gente de marketing, y claro un líder de proyecto; se trata de una aplicación para Android enfocado al mantenimiento industrial que aprovecha la realidad aumentada.

El otro es menos complejo pero con gran potencial, se trata de un sitio web con fines educativos que toma lo mejor de la Web 2.0 y lo lleva a los estudiantes para que los mismos puedan potencializar el desarrollo de sus capacidades y competencias académicas, además de utilizar a las redes sociales como una valiosa herramienta dentro del ámbito educativo. De este proyecto en particular quisiera hablar un poco más.

¿Qué hay detrás de ese proyecto? ¿Cuáles son las tecnologías que lo potencian? Simple: LAMP. LAMP es un acrónimo del conjunto de elementos que permiten ejecutar un proyecto web sin necesidad de una inversión en tiempo y esfuerzo significativa. Linux, Apache, MySQL y PHP son LAMP. Vayamos por partes entonces.

+++

Linux, o GNU Linux, es el sistema operativo que le da vida al proyecto, equivalente a Windows o Mac OS X, pero en esteroides; es libre, configurable, flexible, potente y efectivo. Existe una gran cantidad de versiones, la mayoría de ellas gratuítas. En lo personal tengo experiencia usando Debian, openSUSE, XandrOS y Ubuntu. En este proyecto en específico estoy usando Ubuntu 10.10 sobre una plataforma Intel de doble núcleo y todo es como caminar sobre algodones, además, si eres novato en sistemas operativos diferentes a Windows entonces Ubuntu es para tí, su eslogan lo dice todo "Linux para seres humanos". Ubuntu es muy fácil de instalar, de configurar y claro: de utilizar. Definitivamente Linux es la columna vertebral de todo proyecto de sitio web que aspire a ser exitoso.



Apache, cuyo nombre oficial es Apache HTTP Server, se trata de un servidor web de código abierto y multiplataforma (Linux, Windows, BSD, Mac OS, etc.), resalta por sus características de confiabilidad, es configurable, modular y en términos generales fácil de utilizar. Además de todo, hoy en día, es el más popular en su categoría.

MySQL es un sistema de gestión de base de datos. Desarrollado por Sun Microsystems y soportado actualmente por Oracle Corp., tiene características importantes como que trabaja con modelos relacionales, es multihilo y por supuesto, multiusuario. Al igual que Apache, MySQL es multiplataforma, lo que ha generalizado su uso en otros horizontes fuera del proyecto GNU Linux. Este elemento se encarga de ser la interfaz que une a la base de datos con el usuario y las aplicaciones que se ejecutan.

PHP, la P del acrónimo también puede ser entendida como Perl, PHP o Python; todos lenguajes de programación. En lo particular yo trabajo con PHP lo que me permite desarrollar sitios web dinámicos. PHP es muy parecido a C, razón por la cual me incliné a usar este lenguaje. También es multiplataforma y es sumamente útil para proyectos desde simples hasta muy complejos.

En conjunto Linux, Apache, MySQL y PHP me brindan grandes ventajas: es fácil y rápido construir un proyecto, es efectivo a la hora de ejecutar, desarrollo mi proyecto de manera local y me resulta económico en tiempo, esfuerzo y claro, dinero. Si están trabajando un proyecto de sitio web, dénle una oportunidad a LAMP. Si no te agrada demasiado Linux también podrías optar por configuraciones WAMP (Windows) o MAMP (Mac OS).

+++


          Dispuestos a pagar   
Tanto se habla de la piratería en el mundo entero que resulta hasta redundante tocar el tema en un simple blog como el mío. Pero me ha llegado un pensamiento a mi mente que me gustaría compartir.

Veo en TV un comercial dónde sale Alex Lora (el del Tri) y otro fulano diciendo “Si yo fuera taquero me pagarías…”, por supuesto, pero también cito otra frase mundana del internet: “Si no puedes comprarlo, entonces descárgalo”.

Recientemente Apple, Inc. ha lanzado una tienda de aplicaciones para su plataforma Mac OS X Snow Leopard (en adelante) muy al estilo de la App Store para dispositivos corriendo iOS (iPod Touch, iPhone y iPad), dónde cualquier persona con una tarjeta de crédito, débito o de prepago iTunes puede comprar/descargar programas informáticos vía internet; algunos se sorprenden y piensan que es la panacea de las aplicaciones; pero no es nada nuevo, GNU Linux y en particular Ubuntu tienen algo similar: los repositorios, obviamente a diferencia de Apple los de Cannonical son gratuitos y libres.

Mac App Store

Ahora bien, al igual que las aplicaciones de la App Store, las de la Mac App Store tienen un costo si resultan útiles y son de calidad, las gratuitas dejan mucho que desear o son simples demos; sin embargo leo en un blog muy popular de habla hispana que en el primer día de ventas se vendieron un millón de aplicaciones, si señor: 1,000,000 de programas, algunas de costo otras gratuitas. Supongamos que cada App cuesta en promedio (por aquello de compensar las gratuitas) 1 dólar, 12 pesos mexicanos, entonces durante el primer día Apple generó un negocio de 1 millón de dólares, si usted querido lector o yo generáramos un negocio de esa magnitud (un millón de productos vendidos o un millón de servicios ofrecidos), créame, para el tercer día estaríamos retirados en alguna playa del pacífico mexicano.

Eso nos da una idea interesante: hay gente dispuesta a pagar por software, desembolsar dinero por aplicaciones, tomar ese camino antes de comprar un programa pirata o “crackeado”; pero siempre y cuando sea de calidad. Yo mismo, acostumbrado al software libre he pagado varios dólares por aplicaciones de Apple porque son de calidad y útiles en la plataforma citada.

Pero si nos topamos con un producto inferior, de baja calidad y precio exorbitante como los de cierta compañía del noroeste de E.U.A. entonces no nos critiquen por buscar caminos alternos, aunque se caiga en la ilegalidad. ¿Usted pagaría $150 dólares por Microsoft Office 2010?

          Windows vs. Linux   
Dónde doy clases por las mañanas acaban de instalar Ubuntu en reemplazo de Windows XP en los laboratorios de Informática, luego platicaré que tal les va a alumnos y catedráticos.

Platicando con un alumno sobre mi netbook y el sistema operativo instalado discutíamos los pros y contras de Windows, él fanático de Windows , yo de Linux aunque no me incomoda usar Windows. Éste caballero defendía con todo al sistema operativo de Gates, como nunca antes lo había visto en una persona, quizá sea la edad ó los problemas hormonales ...

El caso fue que recordé una anéctoda - chiste que ví no se dónde, y que vendría a completar la saga de Windows vs. Linux:

Un profesor que sólo usaba Windows descubre que uno de sus alumnos no tiene el mismo sistema operativo y empieza la discusión ...
-Vaya, y sino utilizas Windows ¿Que Sistema Operativo utilizas?
-GNU/Linux. -Respondió orgulloso-

El profesor, cuyos fanáticos oídos no podían dar crédito a algo así, exclamó:
-Pero hijo mío ¿qué pecado has cometido para utilizar tal porquería?
El alumno, muy tranquilo, le respondió:
-Mi padre es informático y usa SUSE Linux, mi madre es asesora en seguridad y usa Debian Linux y mi hermano estudia Física y utiliza Linux Mandrake, por eso yo también utilizo GNU/Linux! -remató orgulloso y convencido-
El profesor continúa:
- Bueno, -replicó irritado el profesor-, pero ese no es motivo para utilizar Linux. Tú no tienes porqué hacer lo que hacen tus padres.. Por ejemplo, si tu madre se prostituyese y se drogase todo el día, tu padre se tocara los huevos, bebiese como un cabrón y traficase con drogas y tu hermano atracase comercios, se jalara la pija y robase a abuelitas, entonces, ¿tú qué harías?
A lo que responde el alumno
- Seguramente instalaría Windows, y en su versión Vista

¡Qué poca madre! ...

          Episode 084 - Ubuntu 7.10 Gutsy Gibbon   

In this shortened episode: a brief discussion of my upgrades and installs of the newest release of Ubuntu Linux, 7.10 Gutsy Gibbon; Listener Tip on the bash shell's double-exclamation point history operator; email feedback.


          Episode 081 - Audio in Linux   

In this guest episode: Duncan Macneil discusses a variety of issues regarding audio in Linux, including device drivers, recording applications, and cleaning up audio files in Audacity; Listener Tip on a list of free shell accounts; email feedback.


          Episode 062 - Home Servers Part 8: Music Servers   

In this episode: Happy Mother's Day and congratulations to Pat Davila of The Linux Link Tech Show; US VOIP providers; a discussion of various ways to serve music files, from Samba, to SSHFS, GNUMP3d, Firefly Media Server (formerly known as mt-daapd), and a brief mention of Icecast; listener feedback.

Additional links:

GNUMP3d - UbuntuGeek, Ubuntu Forums, Ubuntu Guide

Firefly Media Server (mt-daapd) - Linux.com, Gentoo Wiki

Icecast - HowToForge, Gentoo Wiki


          Episode 061 - Home Servers Part 7: Simple Email Server   

In this episode: a discussion of how to set up a simple local imap email server using Getmail and Dovecot (additional information here, here, here, here, here, and here); audio and email feedback.


          Episode 056 - Home Servers Part 2: The Apache Web Server   

In this episode: VTC course on Ubuntu Linux; an overview of the Apache Web Server (additional documentation here, here, and here); audio and email listener feedback.


          Episode 051 - VNC   

In this episode: a discussion of how to use VNC to connect to a graphical desktop on a Linux, Windows, or Mac OS X machine using an encrypted SSH tunnel for security; a Listener Tip on modifying the GRUB menu.lst; three audio feedbacks on Linux in schools.

Additional links: some original documentation from AT&T, Linux Planet article, VNC Linux-Windows document, Ubuntu documentation, more Ubuntu documentation, Gentoo Wiki article on VNC, Gentoo Wiki article on X11VNC, and other documentation regarding VNC on Windows and Linux.

Extra notes are located here.


          Episode 049 - GNU Screen   

In this episode: a few miscellaneous items, such as testing some BSD's, installing Ubuntu Edgy on a Thinkpad t42, and setting up a new server on an old P3 750mhz machine; a discussion of the basics of using GNU Screen (additional tips are here and here); a listener tip on Wine; lots of great feedback including some audio comments on the Linux in schools issue.


          Episode 047 - OpenPGP   

In this episode: a discussion of OpenPGP, GnuPG, and how to use public-key cryptography to sign and encrypt emails and files (here are some excellent how-to's: GnuPG mini Howto, Gentoo Documentation on GnuPG, and Ubuntu Documentation on GnuPG); an audio Listener Tip on the "cal" command; audio and email Listener Feedback.


          Episode 037 - SSH   

In this episode: Linux Reality server move; my initial impressions of the Release Candidate of Ubuntu Edgy Eft; a discussion of OpenSSH with an emphasis on ssh, scp, ssh-keygen, public/private key authentication, and dynamic port forwarding (additional link to PuTTY, a Windows SSH client); a Listener Tip on the Flock web browser; listener feedback.

Extra notes are located here.


          Episode 029 - Printer Networking   

In this episode: listener feedback, two Listener Tips, a review and discussion of CUPS and how to connect local and networked printers to and from Linux and Windows. Additional resources are here, here, and here.

Extra notes are located here.


          Episode 020 - Ubuntu Linux 6.06 Part 2   

In this episode: three audio feedbacks; new segment “Listener Tips”; Listener Tips from Anita of LinuxBasics.org regarding exporting a partition table to a text file and using a rescue disk; a walk-through of an installation of Ubuntu Linux 6.06 “Dapper Drake” onto a hard drive; a discussion of two scripts created by the Ubuntu community, Automatix and Easy Ubuntu, that help Ubuntu users install a variety of third-party free and non-free software.


          Episode 019 - Ubuntu Linux 6.06 Part 1   

In this episode: listener feedback, including how to install flash and java in DSL; downloading and booting Ubuntu Linux 6.06 “Dapper Drake”; a discussion of the Ubuntu GNOME desktop environment, including a look at Nautilus, the GNOME file manager; a review of how to install additional packages from the Ubuntu Add/Remove Applications tool and the Synaptic package manager.


          Episode 017 - SUSE Linux 10.1 Part 2   

In this episode: a brief mention of Ubuntu Dapper Drake; audio feedback; a look at the SUSE YaST configuration tool; how to fix the SUSE Linux 10.1 package management problems with the Smart package manager using the packages provided by a SUSE developer; additional resources on how to enable Smart in SUSE Linux 10.1 are here, here, and here; Serenity charity screenings. Note: in this episode, I forgot to mention that you should run “/sbin/SuSEconfig” and then “/sbin/ldconfig” in a terminal as your root user after installing the Smart packages.


          Episode 006 - Linux ISOs   

In this episode: listener feedback including the first audio comment; a discussion of Linux ISO files; purchasing retail boxes of Linux distributions such as SUSE and Mandriva; purchasing ISO images for a nominal fee from third parties such as OSDisc.com, LinuxCD.org, CheapISO.com, and LinuxISO.org; downloading ISO images for free directly from distributions such as Ubuntu and PCLinuxOS; LQ ISOs, a LinuxQuestions.org service that contains links to download locations of most Linux distributions all in one convenient place.


          Episode 005 - Version Numbering   

In this episode: over 100 pins on the LR Frappr map; international Linux adoption; listener feedback; my two favorite beers; version numbering as it applies to the Linux kernel and Linux distributions; how the movie Toy Story is relevant to the Debian GNU/Linux distribution; Ubuntu naming and numbering conventions.


          Episode 004 - Overview of Linux Distributions   

In this episode: the Linux Reality Frappr map; site forums; listener feedback; a return trip to Distrowatch for an overview of various Linux distributions, including Ubuntu, SUSE, Mandriva, MEPIS, Debian, Kubuntu, KNOPPIX, and PCLinuxOS; a brief discussion of Linux desktop environments, including KDE and GNOME.


          Primi passi con Sinatra   

Ho iniziato a studiare lo sviluppo di applicazioni web con Ruby. Volevo qualcosa di più leggero di Rails per riuscire a padroneggiarlo in meno tempo così la scelta è ricaduta su Sinatra.

Di seguito i passi da fare per poter lavorare su Windows con una macchina virtuale Ubuntu:

  • Scaricare VirtualBox per windows da qui
  • Scaricare l'iso dell'installazione di Ubuntu da qui. Ho scaricato l'ultima versione 11.10 a 32 bit.
  • Da VirtualBox click su Nuova. Poi scegliere Linux e Ubuntu e dare un nome alla macchina virtuale (io ho scelto ubuntu-11.10)
  • Come memoria ho impostato 1024Mb. Per il resto ho lasciato le impostazioni di default.
  • Click su Avvia. Poi click sul simbolo della cartella e ho scelto il file appena scaricato dell'immagine di Ubuntu.
  • Per l'installazione di Ubuntu ho lasciato tutti i default tranne il check di scaricare gli update.
  • Una volta ravviata la macchina virtuale. Installare le Guest Additions (Menu di VirtualBox Dispositivi -> Installa Guest Additions..). A me le ha installate automaticamente. Per verificare che siano installate dal menu Visualizza -> Adatta alla dimensione della finestra deve essere abilitato.

Ora possiamo installare Ruby

  • Installiamo l'ultima versione (la 1.9.3) usando RVM
  • Aprire Terminal e digitare il comando sudo apt-get update
  • Poi installiamo curl con il comando: sudo apt-get curl
  • Ora eseguiamo il comando: bash -s stable < <(curl -s https://raw.github.com/wayneeseguin/rvm/master/binscripts/rvm-installer ). Non riuscirà ad installarlo perché mancano delle dipendenze. L'output del comando ci dirà cosa installare.
  • Copiare quindi il comando: apt-get install build-essential openssl libreadline6 libreadline6-dev curl git-core zlib1g zlib1g-dev libssl-dev libyaml-dev libsqlite3-0 libsqlite3-dev sqlite3 libxml2-dev libxslt-dev autoconf libc6-dev ncurses-dev automake libtool bison subversion
  • Rilanciare: bash -s stable < <(curl -s https://raw.github.com/wayneeseguin/rvm/master/binscripts/rvm-installer )
  • Ora chiudere il terminale e riaprirlo
  • Eseguire: rvm --default use 1.9.3
  • se tutto è andato bene scrivendo ruby --version dovrebbe venire fuori che si sta usando la 1.9.3
  • Ora siamo pronti per installare sinatra con: gem install sinatra
  • Infine installiamo il webserver consigliato dal readme di Sinatra: gem install thin

A questo punto siamo pronti a scrivere la prima applicazione che usa Sinatra. Apriamo Text Editor e creiamo il file myapp.rb con il seguente contenuto:

require 'sinatra'

get '/' do
  'Hello world!'
end

 

Da terminal lanciamo il comando: ruby -rubygems myapp.rb. Partirà il webserver che si metterà in ascolto sulla porta 4567. Apriamo un browser e verifichiamo che il tutto funziona collegandoci all'url: locahost:4567.

Have fun!


          Sporadic DNS issues Ubuntu Mate 16.04 LTS   
Hi, in the past month or so I've started experiencing extremely frustrating DNS issues, whereby I often see a 'connecting message in the browser for 10 -30 seconds before a site loads, and also often have connection timeouts. After much googling I tried this suggestion ---Quote--- [TR]...
          New systemd Vulnerability Affects Ubuntu 17.04 and Ubuntu 16.10, Update Now   
New systemd Vulnerability Affects Ubuntu 17.04 and Ubuntu 16.10, Update Now Canonical informs Ubuntu users that it updated the systemd packages in the Ubuntu 16.10 (Yakkety Yak) and Ubuntu 17.04 (Zesty Zapus) operating systems to patch a recently discovered security issue. The new systemd vulnerability (CVE-2017-9445) appears to affect the systemd-resolved component, which could allow a remote attacker to crash the systemd daemon by causing a denial of service or run malicious programs on the vulnerable, unpatched machines by using a specially crafted DNS response. "In systemd through 233, certain sizes passed to dns_packet_new in systemd-resolved can cause it to allocate a buffer that's too small. A malicious DNS server can exploit this via a response with a specially crafted TCP payload to trick systemd-resolved into allocating a buffer that's too small, and subsequently write arbitrary data beyond the end of it," reads Canonical's Reported by Softpedia 30 minutes ago.
          7-Inch Windows Laptop / Ubuntu Laptop GPD Pocket 7 w/ 8GB RAM Release For Non-Crowdfunders   
The GPD Pocket 7 is a 7-inch Windows laptop and Ubuntu laptop that was crowdfunded during the spring to great success. The first backers in the crowdfunding have now received their units from the first round of release already, but now the GPD Pocket is taking regular orders in stores too, ahead of the new batch release coming up soon: http://promotion.geekbuying.com/promotion/gpd_pocket A lot of peope have called this the world’s … more...
          Stéphane Graber: LXD client on Windows and MacOS   

LXD logo

LXD on other operating systems?

While LXD and especially its API have been designed in a mostly OS-agnostic way, the only OS supported for the daemon right now is Linux (and a rather recent Linux at that).

However since all the communications between the client and daemon happen over a REST API, there is no reason why our default client wouldn’t work on other operating systems.

And it does. We in fact gate changes to the client on having it build and pass unit tests on Linux, Windows and MacOS.

This means that you can run one or more LXD daemons on Linux systems on your network and then interact with those remotely from any Linux, Windows or MacOS machine.

Setting up your LXD daemon

We’ll be connecting to the LXD daemon over the network, so you’ll need to make sure it’s listening and has a password configured so that new clients can add themselves to the trust store.

This can be done with:

lxc config set core.https_address "[::]:8443"
lxc config set core.trust_password "my-password"

In my case, that remote LXD can be reached with “djanet.maas.mtl.stgraber.net”, you’ll want to replace that with your LXD server’s FQDN or IP in the commands used below.

Windows client

Pre-built native binaries

Our Windows CI service builds a tarball for every commit. You can grab the latest one here:
https://ci.appveyor.com/project/lxc/lxd/branch/master/artifacts

Then unpack the archive and open a command prompt in the directory where you unpacked the lxc.exe binary.

Build from source

Alternatively, you can build it from source, by first installing Go using the latest MSI based installer from https://golang.org/dl/ and then Git from https://git-scm.com/downloads.

And then in a command prompt, run:

git config --global http.https://gopkg.in.followRedirects true
go get -v -x github.com/lxc/lxd/lxc

Use Ubuntu on Windows (“bash”)

For this, you need to use Windows 10 and have the Windows subsystem for Linux enabled.
With that done, start an Ubuntu shell by launching “bash”. And you’re done.
The LXD client is installed by default in the Ubuntu 16.04 image.

Interact with the remote server

Regardless of which method you picked, you’ve now got access to the “lxc” command and can add your remote server.

Using the native build does have a few restrictions to do with Windows terminal escape codes, breaking things like the arrow keys and password hiding. The Ubuntu on Windows way uses the Linux version of the LXD client and so doesn’t suffer from those limitations.

MacOS client

Even though we do have MacOS CI through Travis, they don’t host artifacts for us and so don’t have prebuilt binaries for people to download.

Build from source

Similarly to the Windows instructions, you can build the LXD client from source, by first installing Go using the latest DMG based installer from https://golang.org/dl/ and then Git from https://git-scm.com/downloads.

Once that’s done, open a new Terminal window and run:

export GOPATH=~/go
go get -v -x github.com/lxc/lxd/lxc
sudo ln -s ~/go/bin/lxc /usr/local/bin/

At which point you can use the “lxc” command.

Conclusion

The LXD client can be built on all the main operating systems and on just about every architecture, this makes it very easy for anyone to interact with existing LXD servers, whether they’re themselves using a Linux machine or not.

Thanks to our pretty strict backward compatibility rules, the version of the client doesn’t really matter. Older clients can talk to newer servers and newer clients can talk to older servers. Obviously in both cases some features will not be available, but normal container worflow operations will work fine.

Extra information

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it


          Dustin Kirkland: Howdy, Windows! A Six-part Series about Ubuntu-on-Windows for Linux.com   

I hope you'll enjoy a shiny new 6-part blog series I recently published at Linux.com.
  1. The first article is a bit of back story, perhaps a behind-the-scenes look at the motivations, timelines, and some of the work performed between Microsoft and Canonical to bring Ubuntu to Windows.
  2. The second article is an updated getting-started guide, with screenshots, showing a Windows 10 user exactly how to enable and run Ubuntu on Windows.
  3. The third article walks through a dozen or so examples of the most essential command line utilities a Windows user, new to Ubuntu (and Bash), should absolutely learn.
  4. The fourth article shows how to write and execute your first script, "Howdy, Windows!", in 6 different dynamic scripting languages (Bash, Python, Perl, Ruby, PHP, and NodeJS).
  5. The fifth article demonstrates how to write, compile, and execute your first program in 7 different compiled programming languages (C, C++, Fortran, Golang).
  6. The sixth and final article conducts some performance benchmarks of the CPU, Memory, Disk, and Network, in both native Ubuntu on a physical machine, and Ubuntu on Windows running on the same system.
I really enjoyed writing these.  Hopefully you'll try some of the examples, and share your experiences using Ubuntu native utilities on a Windows desktop.  You can find the source code of the programming examples in Github and Launchpad:
Cheers,
Dustin
          Dustin Kirkland: HOWTO: Ubuntu on Windows   
As announced last week, Microsoft and Canonical have worked together to bring Ubuntu's userspace natively into Windows 10.

As of today, Windows 10 Insiders can now take Ubuntu on Windows for a test drive!  Here's how...

1) You need to have a system running today's 64-bit build of Windows 10 (Build 14316).


2) To do so, you may need to enroll into the Windows Insider program here, insider.windows.com.


3) You need to notify your Windows desktop that you're a Windows Insider, under "System Settings --> Advanced Windows Update options"


4) You need to set your update ambition to the far right, also known as "the fast ring".


5) You need to enable "developer mode", as this new feature is very pointedly directed specifically at developers.


6) You need to check for updates, apply all updates, and restart.


7) You need to turn on the new Windows feature, "Windows Subsystem for Linux (Beta)".  Note (again) that you need a 64-bit version of Windows!  Without that, you won't see the new option.


8) You need to reboot again.  (Windows sure has a fetish for rebooting!)


9) You press the start button and type "bash".


10) The first time you run "bash.exe", you'll accept the terms of service, download Ubuntu, and then you're off and running!



If you screw something up, and you want to start over, simply open a Windows command shell, and run: lxrun /uninstall /full and then just run bash again.

For bonus points, you might also like to enable the Ubuntu monospace font in your console.  Here's how!

a) Download the Ubuntu monospace font, from font.ubuntu.com.


b) Install the Ubuntu monospace font, by opening the zip file you downloaded, finding UbuntuMono-R.ttf, double clicking on it, and then clicking Install.


c) Enable the Ubuntu monospace font for the command console in the Windows registry.  Open regedit and find this key: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Console\TrueTypeFont and add a new string value name "000" with value data "Ubuntu Mono"




d) Edit your command console preferences to enable the Ubuntu monospace font.

Cheers!
Dustin
          Dustin Kirkland: Still have questions about Bash and Ubuntu on Windows?   
Still have questions about Ubuntu on Windows?
Watch this Channel 9 session, recorded live at Build this week, hosted by Scott Hanselman, with questions answered by Windows kernel developers Russ Alexander, Ben Hillis, and myself representing Canonical and Ubuntu!

For fun, watch the crowd develop in the background over the 30 minute session!

And here's another recorded session with a demo by Rich Turner and Russ Alexander.  The real light bulb goes off at about 8:01.


Cheers,
:-Dustin
          Manuel de la Pena: Ignoring system folders when doing an os.listdir   

Recently a very interesting bug has been reported agains Ubuntu One on Windows. Apparently we try to sync a number of system folders that are present on Windows 7 to be backward compatible.

The problem

The actual problem in the code is that we are using os.listdir. While lisdir on python does return system folders (at the end of the day, they are there) os.walk does not, for example, lets imaging hat we have the following scenario:

Documents
    My Pictures (System folder)
    My Videos (System folder)
    Random dir
    Random Text.txt

If we run os.listdir we would have the following:

import os
>> os.listdir('Documents')
['My Pictures', 'My Videos', 'Random dir', 'Random Text.txt']

While if we use os.walk we have:

import os
path, dirs, files = os.walk('Documents')
print dirs
>> ['Random dir']
print files
>> ['Random Text.txt']

The fix is very simple, simply filter the result from os.listdir using the following function:

import win32file
 
INVALID_FILE_ATTRIBUTES = -1
 
 
def is_system_path(path):
    """Return if the function is a system path."""
    attrs = win32file.GetFileAttributesW(path)
    if attrs == INVALID_FILE_ATTRIBUTES:
        return False
    return win32file.FILE_ATTRIBUTE_SYSTEM & attrs ==\
        win32file.FILE_ATTRIBUTE_SYSTEM

File system events

An interesting question to ask after the above is, how does ReadDirectoryChangesW work with systen directories? Well, thankfully it works correctly. What does that mean? Well, it means the following:

  • Changes in the system folders do not get notified.
  • Moves from a watch directory to a system folder is not a MOVE_TO, MOVE_FROM couple but a FILE_DELETED

The above means that if you have a system folder in a watch path you do not need to worry since the events will work correctly, which are very very good news.


          Manuel de la Pena: Python os.path.expanduser on Windows with Japanese users bug   

Two really good developers, Alecu and Diego, have discovered a very interestning bug in the os.path.expanduser function in Python. If you have a user in your Windows machine with a name hat uses Japanese characters like “??????” you will have the following in your system:

  • The Windows Shell will show the path correctly, that is: “C:\Users\??????”
  • cmd.exe will show: “C:\Users\??????”
  • All the env variables will be wrong, which means they will be similar to the info shown in cmd.exe

The above is clearly a problem, specially when the implementation of os.path.expanduser on Winodws is:

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
def expanduser(path):
    """Expand ~ and ~user constructs.
 
    If user or $HOME is unknown, do nothing."""
    if path[:1] != '~':
        return path
    i, n = 1, len(path)
    while i < n and path[i] not in '/\\':
        i = i + 1
 
    if 'HOME' in os.environ:
        userhome = os.environ['HOME']
    elif 'USERPROFILE' in os.environ:
        userhome = os.environ['USERPROFILE']
    elif not 'HOMEPATH' in os.environ:
        return path
    else:
        try:
            drive = os.environ['HOMEDRIVE']
        except KeyError:
            drive = ''
        userhome = join(drive, os.environ['HOMEPATH'])
 
    if i != 1: #~user
        userhome = join(dirname(userhome), path[1:i])
 
    return userhome + path[i:]

For the time being my proposed fix for Ubuntu One is to do the following:

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
import ctypes
from ctypes import windll, wintypes
 
class GUID(ctypes.Structure):
    _fields_ = [
         ('Data1', wintypes.DWORD),
         ('Data2', wintypes.WORD),
         ('Data3', wintypes.WORD),
         ('Data4', wintypes.BYTE * 8)
    ]
    def __init__(self, l, w1, w2, b1, b2, b3, b4, b5, b6, b7, b8):
        """Create a new GUID."""
        self.Data1 = l
        self.Data2 = w1
        self.Data3 = w2
        self.Data4[:] = (b1, b2, b3, b4, b5, b6, b7, b8)
 
    def __repr__(self):
        b1, b2, b3, b4, b5, b6, b7, b8 = self.Data4
        return 'GUID(%x-%x-%x-%x%x%x%x%x%x%x%x)' % (
                   self.Data1, self.Data2, self.Data3, b1, b2, b3, b4, b5, b6, b7, b8)
 
# constants to be used according to the version on shell32
CSIDL_PROFILE = 40
FOLDERID_Profile = GUID(0x5E6C858F, 0x0E22, 0x4760, 0x9A, 0xFE, 0xEA, 0x33, 0x17, 0xB6, 0x71, 0x73)
 
def expand_user():
    # get the function that we can find from Vista up, not the one in XP
    get_folder_path = getattr(windll.shell32, 'SHGetKnownFolderPath', None)
 
    if get_folder_path is not None:
        # ok, we can use the new function which is recomended by the msdn
        ptr = ctypes.c_wchar_p()
        get_folder_path(ctypes.byref(FOLDERID_Profile), 0, 0, ctypes.byref(ptr))
        return ptr.value
    else:
        # use the deprecated one found in XP and on for compatibility reasons
       get_folder_path = getattr(windll.shell32, 'SHGetSpecialFolderPathW', None)
       buf = ctypes.create_unicode_buffer(300)
       get_folder_path(None, buf, CSIDL_PROFILE, False)
       return buf.value

The above code ensure that we only use SHGetFolderPathW when SHGetKnownFolderPathW is not available in the system. The reasoning for that is that SHGetFolderPathW is deprecated and new applications are encourage to use SHGetKnownFolderPathW.

A much better solution is to patch ntpath.py so that is something like what I propose for Ubuntu One. Does anyone know if this is fixed in Python 3? Shall I propose a fix?

PS: For ref I got the GUI value from here.


          Manuel de la Pena: Performing autoupdates with Bitrock   

On Ubuntu One we use BtiRock to create the packages for Windows. One of the new features I’m working on is to check if there are updates every so often so that the user gets notified. This code on Ubuntu is not needed because the Update Manger takes care of that, but when you work in an inferior OS…

Generate the auto-update.exe

In order to check for updates we use the generated auto-update.exe wizard from BitRock. Generating the wizard is very straight forward first, as with most of the BitRock stuff, we generate the XML to configure the generated .exe.

<autoUpdateProject>
    <fullName>Ubuntu One</fullName>
    <shortName>ubuntuone</shortName>
    <vendor>Canonical</vendor>
    <version>201</version>
    <singleInstanceCheck>1</singleInstanceCheck>
    <requireInstallationByRootUser>0</requireInstallationByRootUser>
    <requestedExecutionLevel>asInvoker</requestedExecutionLevel>
</autoUpdateProject>

There is just a single thing that is worth mentioning about the above XML. The requireInstallationByRootUser is true because we use the generated .exe to check if there are updates present and we do not what the user to have to be root for that, it does not make sense. Once you have the above or similar XML you can execute:

{$bitrock_installation$}\autoupdate\bin\customize.exe" build ubuntuone_autoupdate.xml windows

Which generates the .exe (the output is in ~\Documents\AutoUpdate\output).

Putting it together with Twisted

The following code provides an example of a couple of functions that can be used by the application, first to check for an update, and to perform the actual update.

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
import os
import sys
 
# Avoid pylint error on Linux
# pylint: disable=F0401
import win32api
# pylint: enable=F0401
 
from twisted.internet import defer
from twisted.internet.utils import getProcessValue
 
AUTOUPDATE_EXE_NAME = 'autoupdate-windows.exe'
 
def _get_update_path():
    """Return the path in which the autoupdate command is found."""
    if hasattr(sys, "frozen"):
        exec_path = os.path.abspath(sys.executable)
    else:
        exec_path = os.path.dirname(__file__)
    folder = os.path.dirname(exec_path)
    update_path = os.path.join(folder, AUTOUPDATE_EXE_NAME)
    if os.path.exists(update_path):
        return update_path
    return None
 
 
@defer.inlineCallbacks
def are_updates_present(logger):
    """Return if there are updates for Ubuntu One."""
    update_path = _get_update_path()
    logger.debug('Update path %s', update_path)
    if update_path is not None:
        # If there is an update present we will get 0 and other number
        # otherwise
        retcode = yield getProcessValue(update_path, args=('--mode',
            'unattended'), path=os.path.dirname(update_path))
        logger.debug('Return code %s', retcode)
        if retcode == 0:
            logger.debug('Returning True')
            defer.returnValue(True)
    logger.debug('Returning False')
    defer.returnValue(False)
 
 
def perform_update():
    """Spawn the autoupdate process and call the stop function."""
    update_path = _get_update_path()
    if update_path is not None:
        # lets call the updater with the commands that are required,
        win32api.ShellExecute(None, 'runas',
            update_path,
            '--unattendedmodeui none', '', 0)

With the above you should be able to easily update the installation of your frozen python app on Windows when using BitRock.


          Manuel de la Pena: Using a Deny ACE on Windows for readonly paths   

On of the features that I really like from Ubuntu One is the ability to have Read Only shares that will allow me to share files with some of my friends without them having the chance to change my files. In order to support that in a more explicit way on Windows we needed to be able to change the ACEs of an ACL from a file to stop the user from changing the files. In reality there is no need to change the ACEs since the server will ensure that the files are not changed, but as with python, is better to be explicit that to be implicit.

Our solution has the following details:

  • The file system is not using FAT.
  • We assume that the average user does not change the ACEs of a file usually.
  • If the user changes the ACEs he does not add any deny ACE.
  • We want to keep the already present ACEs.

The idea is very simple, we will add a ACE for the path that will remove the user the write rights so that we cannot edit/rename/delete a file and that he can only list the directories. The full code is the following:

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
USER_SID = LookupAccountName("", GetUserName())[0]
 
def _add_deny_ace(path, rights):
    """Remove rights from a path for the given groups."""
    if not os.path.exists(path):
        raise WindowsError('Path %s could not be found.' % path)
 
    if rights is not None:
        security_descriptor = GetFileSecurity(path, DACL_SECURITY_INFORMATION)
        dacl = security_descriptor.GetSecurityDescriptorDacl()
        # set the attributes of the group only if not null
        dacl.AddAccessDeniedAceEx(ACL_REVISION_DS,
                CONTAINER_INHERIT_ACE | OBJECT_INHERIT_ACE, rights,
                USER_SID)
        security_descriptor.SetSecurityDescriptorDacl(1, dacl, 0)
        SetFileSecurity(path, DACL_SECURITY_INFORMATION, security_descriptor)
 
 
def _remove_deny_ace(path):
    """Remove the deny ace for the given groups."""
    if not os.path.exists(path):
        raise WindowsError('Path %s could not be found.' % path)
    security_descriptor = GetFileSecurity(path, DACL_SECURITY_INFORMATION)
    dacl = security_descriptor.GetSecurityDescriptorDacl()
    # if we delete an ace in the acl the index is outdated and we have
    # to ensure that we do not screw it up. We keep the number of deleted
    # items to update accordingly the index.
    num_delete = 0
    for index in range(0, dacl.GetAceCount()):
        ace = dacl.GetAce(index - num_delete)
        # check if the ace is for the user and its type is 1, that means
        # is a deny ace and we added it, lets remove it
        if USER_SID == ace[2] and ace[0][0] == 1:
            dacl.DeleteAce(index - num_delete)
            num_delete += 1
    security_descriptor.SetSecurityDescriptorDacl(1, dacl, 0)
    SetFileSecurity(path, DACL_SECURITY_INFORMATION, security_descriptor)
 
 
def set_no_rights(path):
    """Set the rights for 'path' to be none.
 
    Set the groups to be empty which will remove all the rights of the file.
 
    """
    os.chmod(path, 0o000)
    rights = FILE_ALL_ACCESS
    _add_deny_ace(path, rights)
 
 
def set_file_readonly(path):
    """Change path permissions to readonly in a file."""
    # we use the win32 api because chmod just sets the readonly flag and
    # we want to have more control over the permissions
    rights = FILE_WRITE_DATA | FILE_APPEND_DATA | FILE_GENERIC_WRITE
    # the above equals more or less to 0444
    _add_deny_ace(path, rights)
 
 
def set_file_readwrite(path):
    """Change path permissions to readwrite in a file."""
    # the above equals more or less to 0774
    _remove_deny_ace(path)
    os.chmod(path, stat.S_IWRITE)
 
 
def set_dir_readonly(path):
    """Change path permissions to readonly in a dir."""
    rights = FILE_WRITE_DATA | FILE_APPEND_DATA
 
    # the above equals more or less to 0444
    _add_deny_ace(path, rights)
 
 
def set_dir_readwrite(path):
    """Change path permissions to readwrite in a dir.
 
    Helper that receives a windows path.
 
    """
    # the above equals more or less to 0774
    _remove_deny_ace(path)
    # remove the read only flag
    os.chmod(path, stat.S_IWRITE)

Adding the Deny ACE

The idea of the code is very simple, we will add a Deny ACE to the path so that the user cannot write it. The Deny ACE is different if it is a file or a directory since we want the user to be able to list the contents of a directory.

3
4
5
6
7
8
9
10
11
12
13
14
15
16
def _add_deny_ace(path, rights):
    """Remove rights from a path for the given groups."""
    if not os.path.exists(path):
        raise WindowsError('Path %s could not be found.' % path)
 
    if rights is not None:
        security_descriptor = GetFileSecurity(path, DACL_SECURITY_INFORMATION)
        dacl = security_descriptor.GetSecurityDescriptorDacl()
        # set the attributes of the group only if not null
        dacl.AddAccessDeniedAceEx(ACL_REVISION_DS,
                CONTAINER_INHERIT_ACE | OBJECT_INHERIT_ACE, rights,
                USER_SID)
        security_descriptor.SetSecurityDescriptorDacl(1, dacl, 0)
        SetFileSecurity(path, DACL_SECURITY_INFORMATION, security_descriptor)

Remove the Deny ACE

Very similar to the above but doing the opposite, lets remove the Deny ACES present for the current user. If you notice we store how many we removed, the reason is simple, if we remove an ACE the index is no longer valid so we have to calculate the correct one by knowing how many we removed.

19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
def _remove_deny_ace(path):
    """Remove the deny ace for the given groups."""
    if not os.path.exists(path):
        raise WindowsError('Path %s could not be found.' % path)
    security_descriptor = GetFileSecurity(path, DACL_SECURITY_INFORMATION)
    dacl = security_descriptor.GetSecurityDescriptorDacl()
    # if we delete an ace in the acl the index is outdated and we have
    # to ensure that we do not screw it up. We keep the number of deleted
    # items to update accordingly the index.
    num_delete = 0
    for index in range(0, dacl.GetAceCount()):
        ace = dacl.GetAce(index - num_delete)
        # check if the ace is for the user and its type is 1, that means
        # is a deny ace and we added it, lets remove it
        if USER_SID == ace[2] and ace[0][0] == 1:
            dacl.DeleteAce(index - num_delete)
            num_delete += 1
    security_descriptor.SetSecurityDescriptorDacl(1, dacl, 0)
    SetFileSecurity(path, DACL_SECURITY_INFORMATION, security_descriptor)

Implement access

Our access implementation takes into account the Deny ACE added to ensure that we do not only look at the flags.

def access(path):
    """Return if the path is at least readable."""
    # lets consider the access on an illegal path to be a special case
    # since that will only occur in the case where the user created the path
    # for a file to be readable it has to be readable either by the user or
    # by the everyone group
    # XXX: ENOPARSE ^ (nessita)
    if not os.path.exists(path):
        return False
    security_descriptor = GetFileSecurity(path, DACL_SECURITY_INFORMATION)
    dacl = security_descriptor.GetSecurityDescriptorDacl()
    for index in range(0, dacl.GetAceCount()):
        # add the sid of the ace if it can read to test that we remove
        # the r bitmask and test if the bitmask is the same, if not, it means
        # we could read and removed it.
        ace = dacl.GetAce(index)
        if USER_SID == ace[2] and ace[0][0] == 1:
            # check wich access is denied
            if ace[1] | FILE_GENERIC_READ == ace[1] or\
               ace[1] | FILE_ALL_ACCESS == ace[1]:
                return False
    return True

Implement can_write

The following code is similar to access but checks if we have a readonly file.

def can_write(path):
    """Return if the path is at least readable."""
    # lets consider the access on an illegal path to be a special case
    # since that will only occur in the case where the user created the path
    # for a file to be readable it has to be readable either by the user or
    # by the everyone group
    # XXX: ENOPARSE ^ (nessita)
    if not os.path.exists(path):
        return False
    security_descriptor = GetFileSecurity(path, DACL_SECURITY_INFORMATION)
    dacl = security_descriptor.GetSecurityDescriptorDacl()
    for index in range(0, dacl.GetAceCount()):
        # add the sid of the ace if it can read to test that we remove
        # the r bitmask and test if the bitmask is the same, if not, it means
        # we could read and removed it.
        ace = dacl.GetAce(index)
        if USER_SID == ace[2] and ace[0][0] == 1:
            if ace[1] | FILE_GENERIC_WRITE == ace[1] or\
               ace[1] | FILE_WRITE_DATA == ace[1] or\
               ace[1] | FILE_APPEND_DATA == ace[1] or\
               ace[1] | FILE_ALL_ACCESS == ace[1]:
                # check wich access is denied
                return False
    return True

And that is about it, I hope it helps other projects :D


          Manuel de la Pena: ReadDirectoryChangesW and Twisted   

Last week was probably one of the best coding sprints I have had since I started working in Canonical, I’m serious!. I had the luck to pair program with alecu on the FilesystemMonitor that we use in Ubuntu One on windows. The implementation has improved so much that I wanted to blog about it and show it as an example of how to hook the ReadDirectoryChangesW call from COM into twisted so that you can process the events using twisted which is bloody cool.

We have reduce the implementation of the Watch and WatchManager to match our needs and reduce the API provided since we do not use all the API provided by pyinotify. The Watcher implementation is as follows:

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
101
102
103
104
105
106
107
108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139
140
141
142
143
144
145
146
147
148
149
150
151
152
153
154
155
156
157
158
159
160
161
162
163
164
165
166
167
168
169
170
171
172
173
174
175
176
177
178
179
180
181
182
183
184
185
186
187
188
189
190
191
192
193
194
195
196
197
198
199
200
201
202
203
204
205
206
207
208
209
210
211
212
213
214
class Watch(object):
    """Implement the same functions as pyinotify.Watch."""
 
    def __init__(self, watch_descriptor, path, mask, auto_add, processor,
        buf_size=8192):
        super(Watch, self).__init__()
        self.log = logging.getLogger('ubuntuone.SyncDaemon.platform.windows.' +
            'filesystem_notifications.Watch')
        self.log.setLevel(TRACE)
        self._processor = processor
        self._buf_size = buf_size
        self._wait_stop = CreateEvent(None, 0, 0, None)
        self._overlapped = OVERLAPPED()
        self._overlapped.hEvent = CreateEvent(None, 0, 0, None)
        self._watching = False
        self._descriptor = watch_descriptor
        self._auto_add = auto_add
        self._ignore_paths = []
        self._cookie = None
        self._source_pathname = None
        self._process_thread = None
        # remember the subdirs we have so that when we have a delete we can
        # check if it was a remove
        self._subdirs = []
        # ensure that we work with an abspath and that we can deal with
        # long paths over 260 chars.
        if not path.endswith(os.path.sep):
            path += os.path.sep
        self._path = os.path.abspath(path)
        self._mask = mask
        # this deferred is fired when the watch has started monitoring
        # a directory from a thread
        self._watch_started_deferred = defer.Deferred()
 
    @is_valid_windows_path(path_indexes=[1])
    def _path_is_dir(self, path):
        """Check if the path is a dir and update the local subdir list."""
        self.log.debug('Testing if path %r is a dir', path)
        is_dir = False
        if os.path.exists(path):
            is_dir = os.path.isdir(path)
        else:
            self.log.debug('Path "%s" was deleted subdirs are %s.',
                path, self._subdirs)
            # we removed the path, we look in the internal list
            if path in self._subdirs:
                is_dir = True
                self._subdirs.remove(path)
        if is_dir:
            self.log.debug('Adding %s to subdirs %s', path, self._subdirs)
            self._subdirs.append(path)
        return is_dir
 
    def _process_events(self, events):
        """Process the events form the queue."""
        # do not do it if we stop watching and the events are empty
        if not self._watching:
            return
 
        # we transform the events to be the same as the one in pyinotify
        # and then use the proc_fun
        for action, file_name in events:
            if any([file_name.startswith(path)
                        for path in self._ignore_paths]):
                continue
            # map the windows events to the pyinotify ones, tis is dirty but
            # makes the multiplatform better, linux was first :P
            syncdaemon_path = get_syncdaemon_valid_path(
                                        os.path.join(self._path, file_name))
            is_dir = self._path_is_dir(os.path.join(self._path, file_name))
            if is_dir:
                self._subdirs.append(file_name)
            mask = WINDOWS_ACTIONS[action]
            head, tail = os.path.split(file_name)
            if is_dir:
                mask |= IN_ISDIR
            event_raw_data = {
                'wd': self._descriptor,
                'dir': is_dir,
                'mask': mask,
                'name': tail,
                'path': '.'}
            # by the way in which the win api fires the events we know for
            # sure that no move events will be added in the wrong order, this
            # is kind of hacky, I dont like it too much
            if WINDOWS_ACTIONS[action] == IN_MOVED_FROM:
                self._cookie = str(uuid4())
                self._source_pathname = tail
                event_raw_data['cookie'] = self._cookie
            if WINDOWS_ACTIONS[action] == IN_MOVED_TO:
                event_raw_data['src_pathname'] = self._source_pathname
                event_raw_data['cookie'] = self._cookie
            event = Event(event_raw_data)
            # FIXME: event deduces the pathname wrong and we need to manually
            # set it
            event.pathname = syncdaemon_path
            # add the event only if we do not have an exclude filter or
            # the exclude filter returns False, that is, the event will not
            # be excluded
            self.log.debug('Event is %s.', event)
            self._processor(event)
 
    def _call_deferred(self, f, *args):
        """Executes the defeered call avoiding possible race conditions."""
        if not self._watch_started_deferred.called:
            f(args)
 
    def _watch(self):
        """Watch a path that is a directory."""
        # we are going to be using the ReadDirectoryChangesW whihc requires
        # a directory handle and the mask to be used.
        handle = CreateFile(
            self._path,
            FILE_LIST_DIRECTORY,
            FILE_SHARE_READ | FILE_SHARE_WRITE,
            None,
            OPEN_EXISTING,
            FILE_FLAG_BACKUP_SEMANTICS | FILE_FLAG_OVERLAPPED,
            None)
        self.log.debug('Watching path %s.', self._path)
        while True:
            # important information to know about the parameters:
            # param 1: the handle to the dir
            # param 2: the size to be used in the kernel to store events
            # that might be lost while the call is being performed. This
            # is complicated to fine tune since if you make lots of watcher
            # you migh used too much memory and make your OS to BSOD
            buf = AllocateReadBuffer(self._buf_size)
            try:
                ReadDirectoryChangesW(
                    handle,
                    buf,
                    self._auto_add,
                    self._mask,
                    self._overlapped,
                )
                reactor.callFromThread(self._call_deferred,
                    self._watch_started_deferred.callback, True)
            except error:
                # the handle is invalid, this may occur if we decided to
                # stop watching before we go in the loop, lets get out of it
                reactor.callFromThread(self._call_deferred,
                    self._watch_started_deferred.errback, error)
                break
            # wait for an event and ensure that we either stop or read the
            # data
            rc = WaitForMultipleObjects((self._wait_stop,
                                         self._overlapped.hEvent),
                                         0, INFINITE)
            if rc == WAIT_OBJECT_0:
                # Stop event
                break
            # if we continue, it means that we got some data, lets read it
            data = GetOverlappedResult(handle, self._overlapped, True)
            # lets ead the data and store it in the results
            events = FILE_NOTIFY_INFORMATION(buf, data)
            self.log.debug('Events from ReadDirectoryChangesW are %s', events)
            reactor.callFromThread(self._process_events, events)
 
        CloseHandle(handle)
 
    @is_valid_windows_path(path_indexes=[1])
    def ignore_path(self, path):
        """Add the path of the events to ignore."""
        if not path.endswith(os.path.sep):
            path += os.path.sep
        if path.startswith(self._path):
            path = path[len(self._path):]
            self._ignore_paths.append(path)
 
    @is_valid_windows_path(path_indexes=[1])
    def remove_ignored_path(self, path):
        """Reaccept path."""
        if not path.endswith(os.path.sep):
            path += os.path.sep
        if path.startswith(self._path):
            path = path[len(self._path):]
            if path in self._ignore_paths:
                self._ignore_paths.remove(path)
 
    def start_watching(self):
        """Tell the watch to start processing events."""
        for current_child in os.listdir(self._path):
            full_child_path = os.path.join(self._path, current_child)
            if os.path.isdir(full_child_path):
                self._subdirs.append(full_child_path)
        # start to diff threads, one to watch the path, the other to
        # process the events.
        self.log.debug('Start watching path.')
        self._watching = True
        reactor.callInThread(self._watch)
        return self._watch_started_deferred
 
    def stop_watching(self):
        """Tell the watch to stop processing events."""
        self.log.info('Stop watching %s', self._path)
        SetEvent(self._wait_stop)
        self._watching = False
        self._subdirs = []
 
    def update(self, mask, auto_add=False):
        """Update the info used by the watcher."""
        self.log.debug('update(%s, %s)', mask, auto_add)
        self._mask = mask
        self._auto_add = auto_add
 
    @property
    def path(self):
        """Return the patch watched."""
        return self._path
 
    @property
    def auto_add(self):
        return self._auto_add

The important details of this implementations are the following:

Use a deferred to notify that the watch started.

During or tests we noticed that the start watch function was slow which would mean that from the point when we start watching the directory and the point when the thread actually started we would be loosing events. The function now returns a deferred that will be fired when the ReadDirectoryChangesW has been called which ensures that no events will be lost. The interesting parts are the following:

define the deferred

31
32
33
       # this deferred is fired when the watch has started monitoring
        # a directory from a thread
        self._watch_started_deferred = defer.Deferred()

Call the deferred either when we successfully started watching:

128
129
130
131
132
133
134
135
136
137
138
            buf = AllocateReadBuffer(self._buf_size)
            try:
                ReadDirectoryChangesW(
                    handle,
                    buf,
                    self._auto_add,
                    self._mask,
                    self._overlapped,
                )
                reactor.callFromThread(self._call_deferred,
                    self._watch_started_deferred.callback, True)

Call it when we do have an error:

139
140
141
142
143
144
            except error:
                # the handle is invalid, this may occur if we decided to
                # stop watching before we go in the loop, lets get out of it
                reactor.callFromThread(self._call_deferred,
                    self._watch_started_deferred.errback, error)
                break

Threading and firing the reactor.

There is an interesting detail to take care of in this code. We have to ensure that the deferred is not called more than once, to do that you have to callFromThread a function that will fire the event only when it was not already fired like this:

103
104
105
106
    def _call_deferred(self, f, *args):
        """Executes the defeered call avoiding possible race conditions."""
        if not self._watch_started_deferred.called:
            f(args)

If you do not do the above, but the code bellow you will have a race condition in which the deferred is called more than once.

            buf = AllocateReadBuffer(self._buf_size)
            try:
                ReadDirectoryChangesW(
                    handle,
                    buf,
                    self._auto_add,
                    self._mask,
                    self._overlapped,
                )
                if not self._watch_started_deferred.called:
                    reactor.callFromThread(self._watch_started_deferred.callback, True)
            except error:
                # the handle is invalid, this may occur if we decided to
                # stop watching before we go in the loop, lets get out of it
                if not self._watch_started_deferred.called:
                    reactor.callFromThread(self._watch_started_deferred.errback, error)
                break

Execute the processing of events in the reactor main thread.

Alecu has bloody great ideas way too often, and this is one of his. The processing of the events is queued to be executed in the twisted reactor main thread which reduces the amount of threads we use and will ensure that the events are processed in the correct order.

153
154
155
156
157
158
            # if we continue, it means that we got some data, lets read it
            data = GetOverlappedResult(handle, self._overlapped, True)
            # lets ead the data and store it in the results
            events = FILE_NOTIFY_INFORMATION(buf, data)
            self.log.debug('Events from ReadDirectoryChangesW are %s', events)
            reactor.callFromThread(self._process_events, events)

Just for this the flight to Buenos Aires was well worth it!!! For anyone to see the full code feel free to look at ubuntuone.platform.windows from ubuntuone.


          Manuel de la Pena: Exasperated by the Windows filesystem   

At the moment some of the tests (and I cannot point out which ones) of ubuntuone-client fail when they are ran on Windows. The reason for this is due to the way in which we get the notifications out of the file system and the way the tests are written. Before I blame the OS or the tests, let me explain a number of facts about the Windows filesystem and the possible ways to interact with it.

To be able to get file system changes from the OS the Win32 API provides the following:

SHChangedNotifyRegister

This function was broken up to Vista when it was fixed, Unfortunately AFAIK we also support Windows XP which means that we cannot trust this function. On top of that taking this path means that we can have a performance issue. Because the function is build on top of Windows messages, if too many changes occur the sync daemon would start receiving roll up messages that just state that something changed and it would be up to the sync daemon to decide what really happened. Therefore we can all agree that this is a no no, right?

FindFirstChangeNotification

This is a really easy function to use which is based on ReadDirectoryChangesW (I think is a simple wrapper around it) that lets you know that something changed but gives no information about what changed. Because if is based on ReadDirectoryChangesW it suffers from the same issues.

ReadDirectoryChangesW

This is by far the most common way to get the notification changes from the system. Now, in theory there are two possible cases which can go wrong that would affect the events raised by this function:

  1. There are too many events and the buffer gets overloaded and we start loosing events. A simple way to solve this issues is to process the events in a diff thread asap so that we can keep reading the changes.
  2. We use the sync version of the function which means that we could have the following issues:
    • Blue screen of death because we used too much memory from the kernel space.
    • We cannot close the handles used to watch the changes in the directories. This makes the threads to end up blocked.

As I mentioned this is the theory and therefore makes perfect sense to choose this option as the way to get notified by the changes until… you hit a great little feature of Windows called write-behind caching. The idea of write-behind caching is the following one:

When you attempt to write a new file on your HD Windows does not directly modify the HD. Instead it makes a not of the fact that your intention is to write on disk and saves your changes in memory. Ins’t that smart?

Well, that lovely feature does come set as default AFAIK from XP onwards. Any smart person would wonder how does that interact with FindFirstChangeNotification/ReadDirectoryChangesW, well after some work here is what I have managed to find out:

The IO Manager (internal to the kernel) is queueing up disk-write requests in an internal buffer, and the actual changes are not physically committed until some condition is met which I believe is for the “write-behind caching” feature. The problem appears to be that the user-space callback via FileSystemWatcher/ReadDirectoryChanges does not occur when disk-write requests are inserted into the queue, but rather occurs when they are leaving the queue and being physically committed to disk. For what I have been able to manage through observation, the life time of a queue is based on:

  1. Whether more writes are being inserted in the q.
  2. Is another app request a read from an item in the q.

This means that when using FileSystemWatcher/ReadDirectoryChanges the events are fired only when the changes are actually committed and as for a user-space program this follows a non-deterministic process (insert spanish swearing here). a way to work around this issue is to use the FluxhFileBuffers function on the volume, which does need admin rights, yeah!

Change Journal records

Well, this allows to track the changes that have been committed in an NTFS system (that means that we do not have support to FAT). This technique allows to keep track of the changes using an update sequence number that keeps track of changes in an interesting manner. At first look, although parsing the data is hard, this solution seems to be very similar to the one used by pyinotify and therefore someone will say, hey, let just ell twisted to do a select on that file and read the changes. Well, no, is not that easy, files do not provide the functionality used for select, just sockets (http://msdn.microsoft.com/en-us/library/aa363803%28VS.85%29.aspx) /me jumps of happiness

File system filterr

Well, this is an easy one to summarize, you have to write a driver like piece of code. Means C, COM and being able to crash the entire system with a nice blue screen (although I can change the color to aubergine before we crash)

Conclusion

At this point I hope I have convinced a few to believe that ReadDirectoryChangesW is the best option to take but might be wondering why I mentioned the write-behind caching feature, well here comes my complain towards the tests. We do use the real file system notifications for testing and the trial test cases do have a timeout! Those two facts plus the lovely write-behind caching feature mean that the tests on Windows fail just because the bloody evens are not raise until the leave the q from the IO manager.


          Manuel de la Pena: Bitten in the ass by ReadDirectoryChangesW and multithreading   

During the past few days I have been trying to track down an issue in the Ubuntu One client tests when ran on Windows that would use all the threads that the python process could have. As you can imaging finding out why there are deadlocks is quite hard, specially when I though that the code was thread safe, guess what? it wasn’t

The bug I had in the code was related to the way in which ReadDirectoryChangesW works. This functions has two different ways to be executed:

Synchronous

The ReadDirectoryChangesW can be executed in a sync mode by NOT providing a OVERLAPPED structure to perform the IO operations, for example:

def _watcherThread(self, dn, dh, changes):
        flags = win32con.FILE_NOTIFY_CHANGE_FILE_NAME
        while 1:
            try:
                print "waiting", dh
                changes = win32file.ReadDirectoryChangesW(dh,
                                                          8192,
                                                          False,
                                                          flags)
                print "got", changes
            except:
                raise
            changes.extend(changes)

The above example has the following two problems:

  • ReadDirectoryChangesW without an OVERLAPPED blocks infinitely.
  • If another thread attempts to close the handle while ReadDirectoryChangesW is waiting on it, the CloseHandle() method blocks (which has nothing to do with the GIL – it is correctly managed)

I got bitten in the ass by the second item which broke my tests in two different ways since it let thread block and a Handle used so that the rest of the tests could not remove the tmp directories that were under used by the block threads.

Asynchronous

In other to be able to use the async version of the function we just have to use an OVERLAPPED structure, this way the IO operations will no block and we will also be able to close the handle from a diff thread.

def _watcherThreadOverlapped(self, dn, dh, changes):
        flags = win32con.FILE_NOTIFY_CHANGE_FILE_NAME
        buf = win32file.AllocateReadBuffer(8192)
        overlapped = pywintypes.OVERLAPPED()
        overlapped.hEvent = win32event.CreateEvent(None, 0, 0, None)
        while 1:
            win32file.ReadDirectoryChangesW(dh,
                                            buf,
                                            False, #sub-tree
                                            flags,
                                            overlapped)
            # Wait for our event, or for 5 seconds.
            rc = win32event.WaitForSingleObject(overlapped.hEvent, 5000)
            if rc == win32event.WAIT_OBJECT_0:
                # got some data!  Must use GetOverlappedResult to find out
                # how much is valid!  0 generally means the handle has
                # been closed.  Blocking is OK here, as the event has
                # already been set.
                nbytes = win32file.GetOverlappedResult(dh, overlapped, True)
                if nbytes:
                    bits = win32file.FILE_NOTIFY_INFORMATION(buf, nbytes)
                    changes.extend(bits)
                else:
                    # This is "normal" exit - our 'tearDown' closes the
                    # handle.
                    # print "looks like dir handle was closed!"
                    return
            else:
                print "ERROR: Watcher thread timed-out!"
                return # kill the thread!

Using the ReadDirectoryW function in this way does solve all the other issues that are found on the sync version and the only extra overhead added is that you need to understand how to deal with COM events which is not that hard after you have worked with it for a little.

I leave this here for people that might find the same issue and for me to remember how much my ass hurt.

References


          Manuel de la Pena: Py2exe: Adding pkg_resources required by dependencies   

One of the things we wanted to achieve for the Windows port of Ubuntu One was to deploy to the users systems .exe files rather than requiring them to have python and all the different dependencies installed in their machine. There are different reasons we wanted to do this, but this post is not related to that. The goal of this post is to explain what to do when you are using py2exe and you depend on a package such as lazr.restfulclient.

Why lazr.restfulclient?

There are different reasons why I’m using lazr.restfulclient as an example:

  • It is a dependency we do have on Ubuntu One, and therefore I already have done the work with it.
  • It uses two features of setuptools that do not play well with py2exe:
    • It uses namespaced packages.
    • I uses pkg_resources to load resources used for the client.

Working around the use of namedspaced packages

This is actually a fairly easy thing to solve and it is well documented in the py2exe wiki, nevertheless I’d like to show it in this post so that the inclusion of the lazr.restfulclient is complete.

The main issue with namedspaced packages is that you have to tell the module finder from py2exe where to find those packages, which in our example are lazr.authentication, lazr.restfulclient and lazr.uri. A way to do that would be the following:

import lazr
try:
    import py2exe.mf as modulefinder
except ImportError:
    import modulefinder
 
for p in lazr.__path__:
        modulefinder.AddPackagePath(__name__, p)

Adding the lazr resources

This is a more problematic issue to solve since we have to work around a limitation found in py2exe. The lazr.restfulcient tries to load a resource from the py2exe library.zip but as the zipfile is reserved for compiled files, and therefore the module fails. In py2exe there is no way to state that those resource files have to be copied to the library.zip which would mean that an error is raised at runtime when trying to use the lib but not at build time.

The best way (if not the only one) to solve this is to extend the py2exe command to copy the resource files to the folders that are zipped before they are embedded, that way pkg_resource will be able to load the file with no problems.

import os
import glob
import lazr.restfulclient
from py2exe.build_exe import py2exe as build_exe
 
class LazrMediaCollector(build_exe):
    """Extension that copies lazr missing data."""
 
    def copy_extensions(self, extensions):
        """Copy the missing extensions."""
        build_exe.copy_extensions(self, extensions)
 
        # Create the media subdir where the
        # Python files are collected.
        media = os.path.join('lazr', 'restfulclient')
        full = os.path.join(self.collect_dir, media)
        if not os.path.exists(full):
            self.mkpath(full)
 
        # Copy the media files to the collection dir.
        # Also add the copied file to the list of compiled
        # files so it will be included in zipfile.
        for f in glob.glob(lazr.restfulclient.__path__[0] + '/*.txt'):
            name = os.path.basename(f)
            self.copy_file(f, os.path.join(full, name))
            self.compiled_files.append(os.path.join(media, name))

In order to use the above command class to perform the compilation you simply have to tell setup tools which command class to use.

cmdclass = {'py2exe' : LazrMediaCollector}

With the above done, you can use the usual ‘python setup.py install py2exe’. Now, the question for the Internet, can this be done with Pyinstaller?


          Manuel de la Pena: A look alike pyinotify for Windows   

Before I introduce the code, let me say that this is not a 100% exact implementation of the interfaces that can be found in pyinotify but the implementation of a subset that matches my needs. The main idea of creating this post is to give an example of the implementation of such a library for Windows trying to reuse the code that can be found in pyinotify.

Once I have excused my self, let get into the code. First of all, there are a number of classes from pyinotify that we can use in our code. That subset of classes is the below code which I grabbed from pyinotify git:

#!/usr/bin/env python
 
# pyinotify.py - python interface to inotify
# Copyright (c) 2010 Sebastien Martini <seb@dbzteam.org>
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
"""Platform agnostic code grabed from pyinotify."""
import logging
import os
 
COMPATIBILITY_MODE = False
 
 
class RawOutputFormat:
    """
    Format string representations.
    """
    def __init__(self, format=None):
        self.format = format or {}
 
    def simple(self, s, attribute):
        if not isinstance(s, str):
            s = str(s)
        return (self.format.get(attribute, '') + s +
                self.format.get('normal', ''))
 
    def punctuation(self, s):
        """Punctuation color."""
        return self.simple(s, 'normal')
 
    def field_value(self, s):
        """Field value color."""
        return self.simple(s, 'purple')
 
    def field_name(self, s):
        """Field name color."""
        return self.simple(s, 'blue')
 
    def class_name(self, s):
        """Class name color."""
        return self.format.get('red', '') + self.simple(s, 'bold')
 
output_format = RawOutputFormat()
 
 
class EventsCodes:
    """
    Set of codes corresponding to each kind of events.
    Some of these flags are used to communicate with inotify, whereas
    the others are sent to userspace by inotify notifying some events.
 
    @cvar IN_ACCESS: File was accessed.
    @type IN_ACCESS: int
    @cvar IN_MODIFY: File was modified.
    @type IN_MODIFY: int
    @cvar IN_ATTRIB: Metadata changed.
    @type IN_ATTRIB: int
    @cvar IN_CLOSE_WRITE: Writtable file was closed.
    @type IN_CLOSE_WRITE: int
    @cvar IN_CLOSE_NOWRITE: Unwrittable file closed.
    @type IN_CLOSE_NOWRITE: int
    @cvar IN_OPEN: File was opened.
    @type IN_OPEN: int
    @cvar IN_MOVED_FROM: File was moved from X.
    @type IN_MOVED_FROM: int
    @cvar IN_MOVED_TO: File was moved to Y.
    @type IN_MOVED_TO: int
    @cvar IN_CREATE: Subfile was created.
    @type IN_CREATE: int
    @cvar IN_DELETE: Subfile was deleted.
    @type IN_DELETE: int
    @cvar IN_DELETE_SELF: Self (watched item itself) was deleted.
    @type IN_DELETE_SELF: int
    @cvar IN_MOVE_SELF: Self (watched item itself) was moved.
    @type IN_MOVE_SELF: int
    @cvar IN_UNMOUNT: Backing fs was unmounted.
    @type IN_UNMOUNT: int
    @cvar IN_Q_OVERFLOW: Event queued overflowed.
    @type IN_Q_OVERFLOW: int
    @cvar IN_IGNORED: File was ignored.
    @type IN_IGNORED: int
    @cvar IN_ONLYDIR: only watch the path if it is a directory (new
                      in kernel 2.6.15).
    @type IN_ONLYDIR: int
    @cvar IN_DONT_FOLLOW: don't follow a symlink (new in kernel 2.6.15).
                          IN_ONLYDIR we can make sure that we don't watch
                          the target of symlinks.
    @type IN_DONT_FOLLOW: int
    @cvar IN_MASK_ADD: add to the mask of an already existing watch (new
                       in kernel 2.6.14).
    @type IN_MASK_ADD: int
    @cvar IN_ISDIR: Event occurred against dir.
    @type IN_ISDIR: int
    @cvar IN_ONESHOT: Only send event once.
    @type IN_ONESHOT: int
    @cvar ALL_EVENTS: Alias for considering all of the events.
    @type ALL_EVENTS: int
    """
 
    # The idea here is 'configuration-as-code' - this way, we get
    # our nice class constants, but we also get nice human-friendly text
    # mappings to do lookups against as well, for free:
    FLAG_COLLECTIONS = {'OP_FLAGS': {
        'IN_ACCESS'        : 0x00000001,  # File was accessed
        'IN_MODIFY'        : 0x00000002,  # File was modified
        'IN_ATTRIB'        : 0x00000004,  # Metadata changed
        'IN_CLOSE_WRITE'   : 0x00000008,  # Writable file was closed
        'IN_CLOSE_NOWRITE' : 0x00000010,  # Unwritable file closed
        'IN_OPEN'          : 0x00000020,  # File was opened
        'IN_MOVED_FROM'    : 0x00000040,  # File was moved from X
        'IN_MOVED_TO'      : 0x00000080,  # File was moved to Y
        'IN_CREATE'        : 0x00000100,  # Subfile was created
        'IN_DELETE'        : 0x00000200,  # Subfile was deleted
        'IN_DELETE_SELF'   : 0x00000400,  # Self (watched item itself)
                                          # was deleted
        'IN_MOVE_SELF'     : 0x00000800,  # Self(watched item itself) was moved
        },
                        'EVENT_FLAGS': {
        'IN_UNMOUNT'       : 0x00002000,  # Backing fs was unmounted
        'IN_Q_OVERFLOW'    : 0x00004000,  # Event queued overflowed
        'IN_IGNORED'       : 0x00008000,  # File was ignored
        },
                        'SPECIAL_FLAGS': {
        'IN_ONLYDIR'       : 0x01000000,  # only watch the path if it is a
                                          # directory
        'IN_DONT_FOLLOW'   : 0x02000000,  # don't follow a symlink
        'IN_MASK_ADD'      : 0x20000000,  # add to the mask of an already
                                          # existing watch
        'IN_ISDIR'         : 0x40000000,  # event occurred against dir
        'IN_ONESHOT'       : 0x80000000,  # only send event once
        },
                        }
 
    def maskname(mask):
        """
        Returns the event name associated to mask. IN_ISDIR is appended to
        the result when appropriate. Note: only one event is returned, because
        only one event can be raised at a given time.
 
        @param mask: mask.
        @type mask: int
        @return: event name.
        @rtype: str
        """
        ms = mask
        name = '%s'
        if mask & IN_ISDIR:
            ms = mask - IN_ISDIR
            name = '%s|IN_ISDIR'
        return name % EventsCodes.ALL_VALUES[ms]
 
    maskname = staticmethod(maskname)
 
 
# So let's now turn the configuration into code
EventsCodes.ALL_FLAGS = {}
EventsCodes.ALL_VALUES = {}
for flagc, valc in EventsCodes.FLAG_COLLECTIONS.items():
    # Make the collections' members directly accessible through the
    # class dictionary
    setattr(EventsCodes, flagc, valc)
 
    # Collect all the flags under a common umbrella
    EventsCodes.ALL_FLAGS.update(valc)
 
    # Make the individual masks accessible as 'constants' at globals() scope
    # and masknames accessible by values.
    for name, val in valc.items():
        globals()[name] = val
        EventsCodes.ALL_VALUES[val] = name
 
 
# all 'normal' events
ALL_EVENTS = reduce(lambda x, y: x | y, EventsCodes.OP_FLAGS.values())
EventsCodes.ALL_FLAGS['ALL_EVENTS'] = ALL_EVENTS
EventsCodes.ALL_VALUES[ALL_EVENTS] = 'ALL_EVENTS'
 
 
class _Event:
    """
    Event structure, represent events raised by the system. This
    is the base class and should be subclassed.
 
    """
    def __init__(self, dict_):
        """
        Attach attributes (contained in dict_) to self.
 
        @param dict_: Set of attributes.
        @type dict_: dictionary
        """
        for tpl in dict_.items():
            setattr(self, *tpl)
 
    def __repr__(self):
        """
        @return: Generic event string representation.
        @rtype: str
        """
        s = ''
        for attr, value in sorted(self.__dict__.items(), key=lambda x: x[0]):
            if attr.startswith('_'):
                continue
            if attr == 'mask':
                value = hex(getattr(self, attr))
            elif isinstance(value, basestring) and not value:
                value = "''"
            s += ' %s%s%s' % (output_format.field_name(attr),
                              output_format.punctuation('='),
                              output_format.field_value(value))
 
        s = '%s%s%s %s' % (output_format.punctuation('<'),
                           output_format.class_name(self.__class__.__name__),
                           s,
                           output_format.punctuation('>'))
        return s
 
    def __str__(self):
        return repr(self)
 
 
class _RawEvent(_Event):
    """
    Raw event, it contains only the informations provided by the system.
    It doesn't infer anything.
    """
    def __init__(self, wd, mask, cookie, name):
        """
        @param wd: Watch Descriptor.
        @type wd: int
        @param mask: Bitmask of events.
        @type mask: int
        @param cookie: Cookie.
        @type cookie: int
        @param name: Basename of the file or directory against which the
                     event was raised in case where the watched directory
                     is the parent directory. None if the event was raised
                     on the watched item itself.
        @type name: string or None
        """
        # Use this variable to cache the result of str(self), this object
        # is immutable.
        self._str = None
        # name: remove trailing '\0'
        d = {'wd': wd,
             'mask': mask,
             'cookie': cookie,
             'name': name.rstrip('\0')}
        _Event.__init__(self, d)
        logging.debug(str(self))
 
    def __str__(self):
        if self._str is None:
            self._str = _Event.__str__(self)
        return self._str
 
 
class Event(_Event):
    """
    This class contains all the useful informations about the observed
    event. However, the presence of each field is not guaranteed and
    depends on the type of event. In effect, some fields are irrelevant
    for some kind of event (for example 'cookie' is meaningless for
    IN_CREATE whereas it is mandatory for IN_MOVE_TO).
 
    The possible fields are:
      - wd (int): Watch Descriptor.
      - mask (int): Mask.
      - maskname (str): Readable event name.
      - path (str): path of the file or directory being watched.
      - name (str): Basename of the file or directory against which the
              event was raised in case where the watched directory
              is the parent directory. None if the event was raised
              on the watched item itself. This field is always provided
              even if the string is ''.
      - pathname (str): Concatenation of 'path' and 'name'.
      - src_pathname (str): Only present for IN_MOVED_TO events and only in
              the case where IN_MOVED_FROM events are watched too. Holds the
              source pathname from where pathname was moved from.
      - cookie (int): Cookie.
      - dir (bool): True if the event was raised against a directory.
 
    """
    def __init__(self, raw):
        """
        Concretely, this is the raw event plus inferred infos.
        """
        _Event.__init__(self, raw)
        self.maskname = EventsCodes.maskname(self.mask)
        if COMPATIBILITY_MODE:
            self.event_name = self.maskname
        try:
            if self.name:
                self.pathname = os.path.abspath(os.path.join(self.path,
                                                             self.name))
            else:
                self.pathname = os.path.abspath(self.path)
        except AttributeError, err:
            # Usually it is not an error some events are perfectly valids
            # despite the lack of these attributes.
            logging.debug(err)
 
 
class _ProcessEvent:
    """
    Abstract processing event class.
    """
    def __call__(self, event):
        """
        To behave like a functor the object must be callable.
        This method is a dispatch method. Its lookup order is:
          1. process_MASKNAME method
          2. process_FAMILY_NAME method
          3. otherwise calls process_default
 
        @param event: Event to be processed.
        @type event: Event object
        @return: By convention when used from the ProcessEvent class:
                 - Returning False or None (default value) means keep on
                 executing next chained functors (see chain.py example).
                 - Returning True instead means do not execute next
                   processing functions.
        @rtype: bool
        @raise ProcessEventError: Event object undispatchable,
                                  unknown event.
        """
        stripped_mask = event.mask - (event.mask & IN_ISDIR)
        maskname = EventsCodes.ALL_VALUES.get(stripped_mask)
        if maskname is None:
            raise ProcessEventError("Unknown mask 0x%08x" % stripped_mask)
 
        # 1- look for process_MASKNAME
        meth = getattr(self, 'process_' + maskname, None)
        if meth is not None:
            return meth(event)
        # 2- look for process_FAMILY_NAME
        meth = getattr(self, 'process_IN_' + maskname.split('_')[1], None)
        if meth is not None:
            return meth(event)
        # 3- default call method process_default
        return self.process_default(event)
 
    def __repr__(self):
        return '<%s>' % self.__class__.__name__
 
 
class ProcessEvent(_ProcessEvent):
    """
    Process events objects, can be specialized via subclassing, thus its
    behavior can be overriden:
 
    Note: you should not override __init__ in your subclass instead define
    a my_init() method, this method will be called automatically from the
    constructor of this class with its optionals parameters.
 
      1. Provide specialized individual methods, e.g. process_IN_DELETE for
         processing a precise type of event (e.g. IN_DELETE in this case).
      2. Or/and provide methods for processing events by 'family', e.g.
         process_IN_CLOSE method will process both IN_CLOSE_WRITE and
         IN_CLOSE_NOWRITE events (if process_IN_CLOSE_WRITE and
         process_IN_CLOSE_NOWRITE aren't defined though).
      3. Or/and override process_default for catching and processing all
         the remaining types of events.
    """
    pevent = None
 
    def __init__(self, pevent=None, **kargs):
        """
        Enable chaining of ProcessEvent instances.
 
        @param pevent: Optional callable object, will be called on event
                       processing (before self).
        @type pevent: callable
        @param kargs: This constructor is implemented as a template method
                      delegating its optionals keyworded arguments to the
                      method my_init().
        @type kargs: dict
        """
        self.pevent = pevent
        self.my_init(**kargs)
 
    def my_init(self, **kargs):
        """
        This method is called from ProcessEvent.__init__(). This method is
        empty here and must be redefined to be useful. In effect, if you
        need to specifically initialize your subclass' instance then you
        just have to override this method in your subclass. Then all the
        keyworded arguments passed to ProcessEvent.__init__() will be
        transmitted as parameters to this method. Beware you MUST pass
        keyword arguments though.
 
        @param kargs: optional delegated arguments from __init__().
        @type kargs: dict
        """
        pass
 
    def __call__(self, event):
        stop_chaining = False
        if self.pevent is not None:
            # By default methods return None so we set as guideline
            # that methods asking for stop chaining must explicitely
            # return non None or non False values, otherwise the default
            # behavior will be to accept chain call to the corresponding
            # local method.
            stop_chaining = self.pevent(event)
        if not stop_chaining:
            return _ProcessEvent.__call__(self, event)
 
    def nested_pevent(self):
        return self.pevent
 
    def process_IN_Q_OVERFLOW(self, event):
        """
        By default this method only reports warning messages, you can
        overredide it by subclassing ProcessEvent and implement your own
        process_IN_Q_OVERFLOW method. The actions you can take on receiving
        this event is either to update the variable max_queued_events in order
        to handle more simultaneous events or to modify your code in order to
        accomplish a better filtering diminishing the number of raised events.
        Because this method is defined, IN_Q_OVERFLOW will never get
        transmitted as arguments to process_default calls.
 
        @param event: IN_Q_OVERFLOW event.
        @type event: dict
        """
        log.warning('Event queue overflowed.')
 
    def process_default(self, event):
        """
        Default processing event method. By default does nothing. Subclass
        ProcessEvent and redefine this method in order to modify its behavior.
 
        @param event: Event to be processed. Can be of any type of events but
                      IN_Q_OVERFLOW events (see method process_IN_Q_OVERFLOW).
        @type event: Event instance
        """
        pass
 
 
class PrintAllEvents(ProcessEvent):
    """
    Dummy class used to print events strings representations. For instance this
    class is used from command line to print all received events to stdout.
    """
    def my_init(self, out=None):
        """
        @param out: Where events will be written.
        @type out: Object providing a valid file object interface.
        """
        if out is None:
            out = sys.stdout
        self._out = out
 
    def process_default(self, event):
        """
        Writes event string representation to file object provided to
        my_init().
 
        @param event: Event to be processed. Can be of any type of events but
                      IN_Q_OVERFLOW events (see method process_IN_Q_OVERFLOW).
        @type event: Event instance
        """
        self._out.write(str(event))
        self._out.write('\n')
        self._out.flush()
 
 
class WatchManagerError(Exception):
    """
    WatchManager Exception. Raised on error encountered on watches
    operations.
 
    """
    def __init__(self, msg, wmd):
        """
        @param msg: Exception string's description.
        @type msg: string
        @param wmd: This dictionary contains the wd assigned to paths of the
                    same call for which watches were successfully added.
        @type wmd: dict
        """
        self.wmd = wmd
        Exception.__init__(self, msg)

Unfortunatly we need to implement the code that talks with the Win32 API to be able to retrieve the events in the file system. In my design this is done by the Watch class that looks like this:

# Author: Manuel de la Pena <manuel@canonical.com>
#
# Copyright 2011 Canonical Ltd.
#
# This program is free software: you can redistribute it and/or modify it
# under the terms of the GNU General Public License version 3, as published
# by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranties of
# MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR
# PURPOSE.  See the GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program.  If not, see <http://www.gnu.org/licenses/>.
"""File notifications on windows."""
 
import logging
import os
import re
 
import winerror
 
from Queue import Queue, Empty
from threading import Thread
from uuid import uuid4
from twisted.internet import task, reactor
from win32con import (
    FILE_SHARE_READ,
    FILE_SHARE_WRITE,
    FILE_FLAG_BACKUP_SEMANTICS,
    FILE_NOTIFY_CHANGE_FILE_NAME,
    FILE_NOTIFY_CHANGE_DIR_NAME,
    FILE_NOTIFY_CHANGE_ATTRIBUTES,
    FILE_NOTIFY_CHANGE_SIZE,
    FILE_NOTIFY_CHANGE_LAST_WRITE,
    FILE_NOTIFY_CHANGE_SECURITY,
    OPEN_EXISTING
)
from win32file import CreateFile, ReadDirectoryChangesW
from ubuntuone.platform.windows.pyinotify import (
    Event,
    WatchManagerError,
    ProcessEvent,
    PrintAllEvents,
    IN_OPEN,
    IN_CLOSE_NOWRITE,
    IN_CLOSE_WRITE,
    IN_CREATE,
    IN_ISDIR,
    IN_DELETE,
    IN_MOVED_FROM,
    IN_MOVED_TO,
    IN_MODIFY,
    IN_IGNORED
)
from ubuntuone.syncdaemon.filesystem_notifications import (
    GeneralINotifyProcessor
)
from ubuntuone.platform.windows.os_helper import (
    LONG_PATH_PREFIX,
    abspath,
    listdir
)
 
# constant found in the msdn documentation:
# http://msdn.microsoft.com/en-us/library/ff538834(v=vs.85).aspx
FILE_LIST_DIRECTORY = 0x0001
FILE_NOTIFY_CHANGE_LAST_ACCESS = 0x00000020
FILE_NOTIFY_CHANGE_CREATION = 0x00000040
 
# a map between the few events that we have on windows and those
# found in pyinotify
WINDOWS_ACTIONS = {
  1: IN_CREATE,
  2: IN_DELETE,
  3: IN_MODIFY,
  4: IN_MOVED_FROM,
  5: IN_MOVED_TO
}
 
# translates quickly the event and it's is_dir state to our standard events
NAME_TRANSLATIONS = {
    IN_OPEN: 'FS_FILE_OPEN',
    IN_CLOSE_NOWRITE: 'FS_FILE_CLOSE_NOWRITE',
    IN_CLOSE_WRITE: 'FS_FILE_CLOSE_WRITE',
    IN_CREATE: 'FS_FILE_CREATE',
    IN_CREATE | IN_ISDIR: 'FS_DIR_CREATE',
    IN_DELETE: 'FS_FILE_DELETE',
    IN_DELETE | IN_ISDIR: 'FS_DIR_DELETE',
    IN_MOVED_FROM: 'FS_FILE_DELETE',
    IN_MOVED_FROM | IN_ISDIR: 'FS_DIR_DELETE',
    IN_MOVED_TO: 'FS_FILE_CREATE',
    IN_MOVED_TO | IN_ISDIR: 'FS_DIR_CREATE',
}
 
# the default mask to be used in the watches added by the FilesystemMonitor
# class
FILESYSTEM_MONITOR_MASK = FILE_NOTIFY_CHANGE_FILE_NAME | \
    FILE_NOTIFY_CHANGE_DIR_NAME | \
    FILE_NOTIFY_CHANGE_ATTRIBUTES | \
    FILE_NOTIFY_CHANGE_SIZE | \
    FILE_NOTIFY_CHANGE_LAST_WRITE | \
    FILE_NOTIFY_CHANGE_SECURITY | \
    FILE_NOTIFY_CHANGE_LAST_ACCESS
 
 
# The implementation of the code that is provided as the pyinotify
# substitute
class Watch(object):
    """Implement the same functions as pyinotify.Watch."""
 
    def __init__(self, watch_descriptor, path, mask, auto_add,
        events_queue=None, exclude_filter=None, proc_fun=None):
        super(Watch, self).__init__()
        self.log = logging.getLogger('ubuntuone.platform.windows.' +
            'filesystem_notifications.Watch')
        self._watching = False
        self._descriptor = watch_descriptor
        self._auto_add = auto_add
        self.exclude_filter = None
        self._proc_fun = proc_fun
        self._cookie = None
        self._source_pathname = None
        # remember the subdirs we have so that when we have a delete we can
        # check if it was a remove
        self._subdirs = []
        # ensure that we work with an abspath and that we can deal with
        # long paths over 260 chars.
        self._path = os.path.abspath(path)
        if not self._path.startswith(LONG_PATH_PREFIX):
            self._path = LONG_PATH_PREFIX + self._path
        self._mask = mask
        # lets make the q as big as possible
        self._raw_events_queue = Queue()
        if not events_queue:
            events_queue = Queue()
        self.events_queue = events_queue
 
    def _path_is_dir(self, path):
        """"Check if the path is a dir and update the local subdir list."""
        self.log.debug('Testing if path "%s" is a dir', path)
        is_dir = False
        if os.path.exists(path):
            is_dir = os.path.isdir(path)
        else:
            self.log.debug('Path "%s" was deleted subdirs are %s.',
                path, self._subdirs)
            # we removed the path, we look in the internal list
            if path in self._subdirs:
                is_dir = True
                self._subdirs.remove(path)
        if is_dir:
            self.log.debug('Adding %s to subdirs %s', path, self._subdirs)
            self._subdirs.append(path)
        return is_dir
 
    def _process_events(self):
        """Process the events form the queue."""
        # we transform the events to be the same as the one in pyinotify
        # and then use the proc_fun
        while self._watching or not self._raw_events_queue.empty():
            file_name, action = self._raw_events_queue.get()
            # map the windows events to the pyinotify ones, tis is dirty but
            # makes the multiplatform better, linux was first :P
            is_dir = self._path_is_dir(file_name)
            if os.path.exists(file_name):
                is_dir = os.path.isdir(file_name)
            else:
                # we removed the path, we look in the internal list
                if file_name in self._subdirs:
                    is_dir = True
                    self._subdirs.remove(file_name)
            if is_dir:
                self._subdirs.append(file_name)
            mask = WINDOWS_ACTIONS[action]
            head, tail = os.path.split(file_name)
            if is_dir:
                mask |= IN_ISDIR
            event_raw_data = {
                'wd': self._descriptor,
                'dir': is_dir,
                'mask': mask,
                'name': tail,
                'path': head.replace(self.path, '.')
            }
            # by the way in which the win api fires the events we know for
            # sure that no move events will be added in the wrong order, this
            # is kind of hacky, I dont like it too much
            if WINDOWS_ACTIONS[action] == IN_MOVED_FROM:
                self._cookie = str(uuid4())
                self._source_pathname = tail
                event_raw_data['cookie'] = self._cookie
            if WINDOWS_ACTIONS[action] == IN_MOVED_TO:
                event_raw_data['src_pathname'] = self._source_pathname
                event_raw_data['cookie'] = self._cookie
            event = Event(event_raw_data)
            # FIXME: event deduces the pathname wrong and we need manually
            # set it
            event.pathname = file_name
            # add the event only if we do not have an exclude filter or
            # the exclude filter returns False, that is, the event will not
            # be excluded
            if not self.exclude_filter or not self.exclude_filter(event):
                self.log.debug('Addding event %s to queue.', event)
                self.events_queue.put(event)
 
    def _watch(self):
        """Watch a path that is a directory."""
        # we are going to be using the ReadDirectoryChangesW whihc requires
        # a direcotry handle and the mask to be used.
        handle = CreateFile(
            self._path,
            FILE_LIST_DIRECTORY,
            FILE_SHARE_READ | FILE_SHARE_WRITE,
            None,
            OPEN_EXISTING,
            FILE_FLAG_BACKUP_SEMANTICS,
            None
        )
        self.log.debug('Watchng path %s.', self._path)
        while self._watching:
            # important information to know about the parameters:
            # param 1: the handle to the dir
            # param 2: the size to be used in the kernel to store events
            # that might be lost whilw the call is being performed. This
            # is complicates to fine tune since if you make lots of watcher
            # you migh used to much memory and make your OS to BSOD
            results = ReadDirectoryChangesW(
                handle,
                1024,
                self._auto_add,
                self._mask,
                None,
                None
            )
            # add the diff events to the q so that the can be processed no
            # matter the speed.
            for action, file in results:
                full_filename = os.path.join(self._path, file)
                self._raw_events_queue.put((full_filename, action))
                self.log.debug('Added %s to raw events queue.',
                    (full_filename, action))
 
    def start_watching(self):
        """Tell the watch to start processing events."""
        # get the diff dirs in the path
        for current_child in listdir(self._path):
            full_child_path = os.path.join(self._path, current_child)
            if os.path.isdir(full_child_path):
                self._subdirs.append(full_child_path)
        # start to diff threads, one to watch the path, the other to
        # process the events.
        self.log.debug('Sart watching path.')
        self._watching = True
        watch_thread = Thread(target=self._watch,
            name='Watch(%s)' % self._path)
        process_thread = Thread(target=self._process_events,
            name='Process(%s)' % self._path)
        process_thread.start()
        watch_thread.start()
 
    def stop_watching(self):
        """Tell the watch to stop processing events."""
        self._watching = False
        self._subdirs = []
 
    def update(self, mask, proc_fun=None, auto_add=False):
        """Update the info used by the watcher."""
        self.log.debug('update(%s, %s, %s)', mask, proc_fun, auto_add)
        self._mask = mask
        self._proc_fun = proc_fun
        self._auto_add = auto_add
 
    @property
    def path(self):
        """Return the patch watched."""
        return self._path
 
    @property
    def auto_add(self):
        return self._auto_add
 
    @property
    def proc_fun(self):
        return self._proc_fun
 
 
class WatchManager(object):
    """Implement the same functions as pyinotify.WatchManager."""
 
    def __init__(self, exclude_filter=lambda path: False):
        """Init the manager to keep trak of the different watches."""
        super(WatchManager, self).__init__()
        self.log = logging.getLogger('ubuntuone.platform.windows.'
            + 'filesystem_notifications.WatchManager')
        self._wdm = {}
        self._wd_count = 0
        self._exclude_filter = exclude_filter
        self._events_queue = Queue()
        self._ignored_paths = []
 
    def stop(self):
        """Close the manager and stop all watches."""
        self.log.debug('Stopping watches.')
        for current_wd in self._wdm:
            self._wdm[current_wd].stop_watching()
            self.log.debug('Watch for %s stopped.', self._wdm[current_wd].path)
 
    def get_watch(self, wd):
        """Return the watch with the given descriptor."""
        return self._wdm[wd]
 
    def del_watch(self, wd):
        """Delete the watch with the given descriptor."""
        try:
            watch = self._wdm[wd]
            watch.stop_watching()
            del self._wdm[wd]
            self.log.debug('Watch %s removed.', wd)
        except KeyError, e:
            logging.error(str(e))
 
    def _add_single_watch(self, path, mask, proc_fun=None, auto_add=False,
        quiet=True, exclude_filter=None):
        self.log.debug('add_single_watch(%s, %s, %s, %s, %s, %s)', path, mask,
            proc_fun, auto_add, quiet, exclude_filter)
        self._wdm[self._wd_count] = Watch(self._wd_count, path, mask,
            auto_add, events_queue=self._events_queue,
            exclude_filter=exclude_filter, proc_fun=proc_fun)
        self._wdm[self._wd_count].start_watching()
        self._wd_count += 1
        self.log.debug('Watch count increased to %s', self._wd_count)
 
    def add_watch(self, path, mask, proc_fun=None, auto_add=False,
        quiet=True, exclude_filter=None):
        if hasattr(path, '__iter__'):
            self.log.debug('Added collection of watches.')
            # we are dealing with a collection of paths
            for current_path in path:
                if not self.get_wd(current_path):
                    self._add_single_watch(current_path, mask, proc_fun,
                        auto_add, quiet, exclude_filter)
        elif not self.get_wd(path):
            self.log.debug('Adding single watch.')
            self._add_single_watch(path, mask, proc_fun, auto_add,
                quiet, exclude_filter)
 
    def update_watch(self, wd, mask=None, proc_fun=None, rec=False,
                     auto_add=False, quiet=True):
        try:
            watch = self._wdm[wd]
            watch.stop_watching()
            self.log.debug('Stopped watch on %s for update.', watch.path)
            # update the data and restart watching
            auto_add = auto_add or rec
            watch.update(mask, proc_fun=proc_fun, auto_add=auto_add)
            # only start the watcher again if the mask was given, otherwhise
            # we are not watchng and therefore do not care
            if mask:
                watch.start_watching()
        except KeyError, e:
            self.log.error(str(e))
            if not quiet:
                raise WatchManagerError('Watch %s was not found' % wd, {})
 
    def get_wd(self, path):
        """Return the watcher that is used to watch the given path."""
        for current_wd in self._wdm:
            if self._wdm[current_wd].path in path and \
                self._wdm[current_wd].auto_add:
                return current_wd
 
    def get_path(self, wd):
        """Return the path watched by the wath with the given wd."""
        watch_ = self._wmd.get(wd)
        if watch:
            return watch.path
 
    def rm_watch(self, wd, rec=False, quiet=True):
        """Remove the the watch with the given wd."""
        try:
            watch = self._wdm[wd]
            watch.stop_watching()
            del self._wdm[wd]
        except KeyrError, err:
            self.log.error(str(err))
            if not quiet:
                raise WatchManagerError('Watch %s was not found' % wd, {})
 
    def rm_path(self, path):
        """Remove a watch to the given path."""
        # it would be very tricky to remove a subpath from a watcher that is
        # looking at changes in ther kids. To make it simpler and less error
        # prone (and even better performant since we use less threads) we will
        # add a filter to the events in the watcher so that the events from
        # that child are not received :)
        def ignore_path(event):
            """Ignore an event if it has a given path."""
            is_ignored = False
            for ignored_path in self._ignored_paths:
                if ignore_path in event.pathname:
                    return True
            return False
 
        wd = self.get_wd(path)
        if wd:
            if self._wdm[wd].path == path:
                self.log.debug('Removing watch for path "%s"', path)
                self.rm_watch(wd)
            else:
                self.log.debug('Adding exclude filter for "%s"', path)
                # we have a watch that cotains the path as a child path
                if not path in self._ignored_paths:
                    self._ignored_paths.append(path)
                # FIXME: This assumes that we do not have other function
                # which in our usecase is correct, but what is we move this
                # to other projects evet?!? Maybe using the manager
                # exclude_filter is better
                if not self._wdm[wd].exclude_filter:
                    self._wdm[wd].exclude_filter = ignore_path
 
    @property
    def watches(self):
        """Return a reference to the dictionary that contains the watches."""
        return self._wdm
 
    @property
    def events_queue(self):
        """Return the queue with the events that the manager contains."""
        return self._events_queue
 
 
class Notifier(object):
    """
    Read notifications, process events. Inspired by the pyinotify.Notifier
    """
 
    def __init__(self, watch_manager, default_proc_fun=None, read_freq=0,
                 threshold=10, timeout=-1):
        """Init to process event according to the given timeout & threshold."""
        super(Notifier, self).__init__()
        self.log = logging.getLogger('ubuntuone.platform.windows.'
            + 'filesystem_notifications.Notifier')
        # Watch Manager instance
        self._watch_manager = watch_manager
        # Default processing method
        self._default_proc_fun = default_proc_fun
        if default_proc_fun is None:
            self._default_proc_fun = PrintAllEvents()
        # Loop parameters
        self._read_freq = read_freq
        self._threshold = threshold
        self._timeout = timeout
 
    def proc_fun(self):
        return self._default_proc_fun
 
    def process_events(self):
        """
        Process the event given the threshold and the timeout.
        """
        self.log.debug('Processing events with threashold: %s and timeout: %s',
            self._threshold, self._timeout)
        # we will process an amount of events equal to the threshold of
        # the notifier and will block for the amount given by the timeout
        processed_events = 0
        while processed_events < self._threshold:
            try:
                raw_event = None
                if not self._timeout or self._timeout < 0:
                    raw_event = self._watch_manager.events_queue.get(
                        block=False)
                else:
                    raw_event = self._watch_manager.events_queue.get(
                        timeout=self._timeout)
                watch = self._watch_manager.get_watch(raw_event.wd)
                if watch is None:
                    # Not really sure how we ended up here, nor how we should
                    # handle these types of events and if it is appropriate to
                    # completly skip them (like we are doing here).
                    self.log.warning('Unable to retrieve Watch object '
                        + 'associated to %s', raw_event)
                    processed_events += 1
                    continue
                if watch and watch.proc_fun:
                    self.log.debug('Executing proc_fun from watch.')
                    watch.proc
          Manuel de la Pena: Finding open files on Windows   

Yet again Windows has presented me a challenge when trying to work with its file system, this time in the form of lock files. The Ubuntu One client on linux uses pyinotify to be able to listen to the file system events this, for example, allows the daemon to be updating your files when a new version has been created without the direct intervention of the user.

Although Windows does not have pyinotify (for obvious reasons) a developer that wants to perform such a directory monitoring can rely on the ReadDirectoryChangesW function. This function provides a similar behavior but unfortunately the information it provides is limited when compared with the one from pyinotify. On one hand, there are less events you can listen on Windows (IN_OPEN and IN_CLOSE for example are not present) but it also provides very little information by just giving 5 actions back, that is while on Windows you can listen to:

  • FILE_NOTIFY_CHANGE_FILE_NAME
  • FILE_NOTIFY_CHANGE_DIR_NAME
  • FILE_NOTIFY_CHANGE_ATTRIBUTES
  • FILE_NOTIFY_CHANGE_SIZE
  • FILE_NOTIFY_CHANGE_LAST_WRITE
  • FILE_NOTIFY_CHANGE_LAST_ACCESS
  • FILE_NOTIFY_CHANGE_CREATION
  • FILE_NOTIFY_CHANGE_SECURITY

You will only get back 5 values which are integers that represent the action that was performed. YesterdayI decide to see if it was possible to query the Windows Object Manager to see the currently used FILE HANDLES which would returned the open files. My idea was to write such a function and the pool (ouch!) to find when a file was opened or close. The result of such an attempt is the following:

import os
import struct
 
import winerror
import win32file
import win32con
 
from ctypes import *
from ctypes.wintypes import *
from Queue import Queue
from threading import Thread
from win32api import GetCurrentProcess, OpenProcess, DuplicateHandle
from win32api import error as ApiError
from win32con import (
    FILE_SHARE_READ,
    FILE_SHARE_WRITE,
    OPEN_EXISTING,
    FILE_FLAG_BACKUP_SEMANTICS,
    FILE_NOTIFY_CHANGE_FILE_NAME,
    FILE_NOTIFY_CHANGE_DIR_NAME,
    FILE_NOTIFY_CHANGE_ATTRIBUTES,
    FILE_NOTIFY_CHANGE_SIZE,
    FILE_NOTIFY_CHANGE_LAST_WRITE,
    FILE_NOTIFY_CHANGE_SECURITY,
    DUPLICATE_SAME_ACCESS
)
from win32event import WaitForSingleObject, WAIT_TIMEOUT, WAIT_ABANDONED
from win32event import error as EventError
from win32file import CreateFile, ReadDirectoryChangesW, CloseHandle
from win32file import error as FileError
 
# from ubuntuone.platform.windows.os_helper import LONG_PATH_PREFIX, abspath
 
LONG_PATH_PREFIX = '\\\\?\\'
# constant found in the msdn documentation:
# http://msdn.microsoft.com/en-us/library/ff538834(v=vs.85).aspx
FILE_LIST_DIRECTORY = 0x0001
FILE_NOTIFY_CHANGE_LAST_ACCESS = 0x00000020
FILE_NOTIFY_CHANGE_CREATION = 0x00000040
 
# XXX: the following code is some kind of hack that allows to get the opened
# files in a system. The techinique uses an no documented API from windows nt
# that is internal to MS and might change in the future braking our code :(
UCHAR = c_ubyte
PVOID = c_void_p
 
ntdll = windll.ntdll
 
SystemHandleInformation = 16
STATUS_INFO_LENGTH_MISMATCH = 0xC0000004
STATUS_BUFFER_OVERFLOW = 0x80000005L
STATUS_INVALID_HANDLE = 0xC0000008L
STATUS_BUFFER_TOO_SMALL = 0xC0000023L
STATUS_SUCCESS = 0
 
CURRENT_PROCESS = GetCurrentProcess ()
DEVICE_DRIVES = {}
for d in "abcdefghijklmnopqrstuvwxyz":
    try:
        DEVICE_DRIVES[win32file.QueryDosDevice (d + ":").strip ("\x00").lower ()] = d + ":"
    except FileError, (errno, errctx, errmsg):
        if errno == 2:
            pass
        else:
          raise
 
class x_file_handles(Exception):
    pass
 
def signed_to_unsigned(signed):
    unsigned, = struct.unpack ("L", struct.pack ("l", signed))
    return unsigned
 
class SYSTEM_HANDLE_TABLE_ENTRY_INFO(Structure):
    """Represent the SYSTEM_HANDLE_TABLE_ENTRY_INFO on ntdll."""
    _fields_ = [
        ("UniqueProcessId", USHORT),
        ("CreatorBackTraceIndex", USHORT),
        ("ObjectTypeIndex", UCHAR),
        ("HandleAttributes", UCHAR),
        ("HandleValue", USHORT),
        ("Object", PVOID),
        ("GrantedAccess", ULONG),
    ]
 
class SYSTEM_HANDLE_INFORMATION(Structure):
    """Represent the SYSTEM_HANDLE_INFORMATION on ntdll."""
    _fields_ = [
        ("NumberOfHandles", ULONG),
        ("Handles", SYSTEM_HANDLE_TABLE_ENTRY_INFO * 1),
    ]
 
class LSA_UNICODE_STRING(Structure):
    """Represent the LSA_UNICODE_STRING on ntdll."""
    _fields_ = [
        ("Length", USHORT),
        ("MaximumLength", USHORT),
        ("Buffer", LPWSTR),
    ]
 
class PUBLIC_OBJECT_TYPE_INFORMATION(Structure):
    """Represent the PUBLIC_OBJECT_TYPE_INFORMATION on ntdll."""
    _fields_ = [
        ("Name", LSA_UNICODE_STRING),
        ("Reserved", ULONG * 22),
    ]
 
class OBJECT_NAME_INFORMATION (Structure):
    """Represent the OBJECT_NAME_INFORMATION on ntdll."""
    _fields_ = [
        ("Name", LSA_UNICODE_STRING),
    ]
 
class IO_STATUS_BLOCK_UNION (Union):
    """Represent the IO_STATUS_BLOCK_UNION on ntdll."""
    _fields_ = [
        ("Status", LONG),
        ("Pointer", PVOID),
    ]
 
class IO_STATUS_BLOCK (Structure):
    """Represent the IO_STATUS_BLOCK on ntdll."""
    _anonymous_ = ("u",)
    _fields_ = [
        ("u", IO_STATUS_BLOCK_UNION),
        ("Information", POINTER (ULONG)),
    ]
 
class FILE_NAME_INFORMATION (Structure):
    """Represent the on FILE_NAME_INFORMATION ntdll."""
    filename_size = 4096
    _fields_ = [
        ("FilenameLength", ULONG),
        ("FileName", WCHAR * filename_size),
    ]
 
def get_handles():
    """Return all the processes handles in the system atm."""
    system_handle_information = SYSTEM_HANDLE_INFORMATION()
    size = DWORD (sizeof (system_handle_information))
    while True:
        result = ntdll.NtQuerySystemInformation(
            SystemHandleInformation,
            byref(system_handle_information),
            size,
            byref(size)
        )
        result = signed_to_unsigned(result)
        if result == STATUS_SUCCESS:
            break
        elif result == STATUS_INFO_LENGTH_MISMATCH:
            size = DWORD(size.value * 4)
            resize(system_handle_information, size.value)
        else:
            raise x_file_handles("NtQuerySystemInformation", hex(result))
 
    pHandles = cast(
        system_handle_information.Handles,
        POINTER(SYSTEM_HANDLE_TABLE_ENTRY_INFO * \
                system_handle_information.NumberOfHandles)
    )
    for handle in pHandles.contents:
        yield handle.UniqueProcessId, handle.HandleValue
 
def get_process_handle (pid, handle):
    """Get a handle for the process with the given pid."""
    try:
        hProcess = OpenProcess(win32con.PROCESS_DUP_HANDLE, 0, pid)
        return DuplicateHandle(hProcess, handle, CURRENT_PROCESS,
            0, 0, DUPLICATE_SAME_ACCESS)
    except ApiError,(errno, errctx, errmsg):
        if errno in (
              winerror.ERROR_ACCESS_DENIED,
              winerror.ERROR_INVALID_PARAMETER,
              winerror.ERROR_INVALID_HANDLE,
              winerror.ERROR_NOT_SUPPORTED
        ):
            return None
        else:
            raise
 
 
def get_type_info (handle):
    """Get the handle type information."""
    public_object_type_information = PUBLIC_OBJECT_TYPE_INFORMATION()
    size = DWORD(sizeof(public_object_type_information))
    while True:
        result = signed_to_unsigned(
            ntdll.NtQueryObject(
                handle, 2, byref(public_object_type_information), size, None))
        if result == STATUS_SUCCESS:
            return public_object_type_information.Name.Buffer
        elif result == STATUS_INFO_LENGTH_MISMATCH:
            size = DWORD(size.value * 4)
            resize(public_object_type_information, size.value)
        elif result == STATUS_INVALID_HANDLE:
            return None
        else:
            raise x_file_handles("NtQueryObject.2", hex (result))
 
 
def get_name_info (handle):
    """Get the handle name information."""
    object_name_information = OBJECT_NAME_INFORMATION()
    size = DWORD(sizeof(object_name_information))
    while True:
        result = signed_to_unsigned(
            ntdll.NtQueryObject(handle, 1, byref (object_name_information),
            size, None))
        if result == STATUS_SUCCESS:
            return object_name_information.Name.Buffer
        elif result in (STATUS_BUFFER_OVERFLOW, STATUS_BUFFER_TOO_SMALL,
                        STATUS_INFO_LENGTH_MISMATCH):
            size = DWORD(size.value * 4)
            resize (object_name_information, size.value)
        else:
            return None
 
 
def filepath_from_devicepath (devicepath):
    """Return a file path from a device path."""
    if devicepath is None:
        return None
    devicepath = devicepath.lower()
    for device, drive in DEVICE_DRIVES.items():
        if devicepath.startswith(device):
            return drive + devicepath[len(device):]
    else:
        return devicepath
 
def get_real_path(path):
    """Return the real path avoiding issues with the Library a in Windows 7"""
    assert os.path.isdir(path)
    handle = CreateFile(
        path,
        FILE_LIST_DIRECTORY,
        FILE_SHARE_READ | FILE_SHARE_WRITE,
        None,
        OPEN_EXISTING,
        FILE_FLAG_BACKUP_SEMANTICS,
        None
    )
    name = get_name_info(int(handle))
    CloseHandle(handle)
    return filepath_from_devicepath(name)
 
def get_open_file_handles():
    """Return all the open file handles."""
    print 'get_open_file_handles'
    result = set()
    this_pid = os.getpid()
    for pid, handle in get_handles():
        if pid == this_pid:
            continue
        duplicate = get_process_handle(pid, handle)
        if duplicate is None:
            continue
        else:
            # get the type info and name info of the handle
            type = get_type_info(handle)
            name = get_name_info(handle)
            # add the handle to the result only if it is a file
            if type and type == 'File':
                # the name info represents the path to the object,
                # we need to convert it to a file path and then
                # test that it does exist
                if name:
                    file_path = filepath_from_devicepath(name)
                    if os.path.exists(file_path):
                        result.add(file_path)
    return result
 
def get_open_file_handles_under_directory(directory):
    """get the open files under a directory."""
    result = set()
    all_handles = get_open_file_handles()
    # to avoid issues with Libraries on Windows 7 and later, we will
    # have to get the real path
    directory = get_real_path(os.path.abspath(directory))
    print 'Dir ' + directory
    if not directory.endswith(os.path.sep):
        directory += os.path.sep
    for file in all_handles:
        print 'Current file ' + file
        if directory in file:
            result.add(file)
    return result

The above code uses undocumented functions from the ntdll which I supposed Microsoft does not want me to use. An while it works, the solution does no scale since the process of querying the Object Manager is vey expensive and can rocket your CPU if performed several times. Nevertheless the above code works correctly and could be used to write a tools similar to those written by sysinternals.

I hope someone will find a use for the code, in my case it is code that I’ll have to throw away :(


          Manuel de la Pena: Setting file security attributes on Windows   

While working on making the Ubuntu One code more multiplatform I founded myself having to write some code that would set the attributes of a file on Windows. Ideally os.chmod would do the trick, but of course this is windows, and it is not fully supported. According to the python documentation:

Note: Although Windows supports chmod(), you can only set the file’s read-only flag with it (via the stat.S_IWRITE and stat.S_IREAD constants or a corresponding integer value). All other bits are ignored.

Grrrreat… To solve this issue I have written a small function that will allow to set the attributes of a file by using the win32api and win32security modules. This solves partially the issues since 0444 and others cannot be perfectly map to the Windows world. In my code I have made the assumption that using the groups ‘Everyone’, ‘Administrators’ and the user name would be close enough for our use cases.

Here is the code in case anyone has to go through this:

from win32api import MoveFileEx, GetUserName
 
from win32file import (
    MOVEFILE_COPY_ALLOWED,
    MOVEFILE_REPLACE_EXISTING,
    MOVEFILE_WRITE_THROUGH
)
from win32security import (
    LookupAccountName,
    GetFileSecurity,
    SetFileSecurity,
    ACL,
    DACL_SECURITY_INFORMATION,
    ACL_REVISION
)
from ntsecuritycon import (
    FILE_ALL_ACCESS,
    FILE_GENERIC_EXECUTE,
    FILE_GENERIC_READ,
    FILE_GENERIC_WRITE,
    FILE_LIST_DIRECTORY
)
 
EVERYONE_GROUP = 'Everyone'
ADMINISTRATORS_GROUP = 'Administrators'
 
def _get_group_sid(group_name):
    """Return the SID for a group with the given name."""
    return LookupAccountName('', group_name)[0]
 
 
def _set_file_attributes(path, groups):
    """Set file attributes using the wind32api."""
    security_descriptor = GetFileSecurity(path, DACL_SECURITY_INFORMATION)
    dacl = ACL()
    for group_name in groups:
        # set the attributes of the group only if not null
        if groups[group_name]:
            group_sid = _get_group_sid(group_name)
            dacl.AddAccessAllowedAce(ACL_REVISION, groups[group_name],
                group_sid)
    # the dacl has all the info of the dff groups passed in the parameters
    security_descriptor.SetSecurityDescriptorDacl(1, dacl, 0)
    SetFileSecurity(path, DACL_SECURITY_INFORMATION, security_descriptor)
 
def set_file_readonly(path):
    """Change path permissions to readonly in a file."""
    # we use the win32 api because chmod just sets the readonly flag and
    # we want to have imore control over the permissions
    groups = {}
    groups[EVERYONE_GROUP] = FILE_GENERIC_READ
    groups[ADMINISTRATORS_GROUP] = FILE_GENERIC_READ
    groups[GetUserName()] = FILE_GENERIC_READ
    # the above equals more or less to 0444
    _set_file_attributes(path, groups)

For those who might want to remove the read access from a group, you just have to not pass the group in the groups parameter which would remove the group from the security descriptor.


          Manuel de la Pena: Network status changes on Windows with python   

At the moment I am sprinting in Argentina trying to make the Ubuntu One port to Windows better by adding support to the sync daemon used on Linux. While the rest of the guys are focused in accomodating the current code to my “multiplatform” requirements, I’m working on getting a number of missing parts to work on windows. One of this parts is the lack of network manager on Windows.

One of the things we need to know to coninusly sync you files on windows is to get an event when your network is present, or dies. As usual this is far easier on Linux than on Windows. To get this event you have to implement the ISesNetwork interface from COM that will allow your object to register to network status changes. Due to the absolute lack of examples on the net (or how bad google is getting ;) ) I’ve decided to share the code I managed to get working:

"""Implementation of ISesNework in Python."""
 
import logging
import logging.handlers
 
import pythoncom
 
from win32com.server.policy import DesignatedWrapPolicy
from win32com.client import Dispatch
 
# set te logging to store the data in the ubuntuone folder
handler = logging.handlers.RotatingFileHandler('network_manager.log', 
                    maxBytes=400, backupCount=5)
service_logger = logging.getLogger('NetworkManager')
service_logger.setLevel(logging.DEBUG)
service_logger.addHandler(handler)
 
## from EventSys.h
PROGID_EventSystem = "EventSystem.EventSystem"
PROGID_EventSubscription = "EventSystem.EventSubscription"
 
# sens values for the events, this events contain the uuid of the
# event, the name of the event to be used as well as the method name 
# of the method in the ISesNetwork interface that will be executed for
# the event.
 
SUBSCRIPTION_NETALIVE = ('{cd1dcbd6-a14d-4823-a0d2-8473afde360f}',
                         'UbuntuOne Network Alive',
                         'ConnectionMade')
 
SUBSCRIPTION_NETALIVE_NOQOC = ('{a82f0e80-1305-400c-ba56-375ae04264a1}',
                               'UbuntuOne Net Alive No Info',
                               'ConnectionMadeNoQOCInfo')
 
SUBSCRIPTION_NETLOST = ('{45233130-b6c3-44fb-a6af-487c47cee611}',
                        'UbuntuOne Network Lost',
                        'ConnectionLost')
 
SUBSCRIPTION_REACH = ('{4c6b2afa-3235-4185-8558-57a7a922ac7b}',
                       'UbuntuOne Network Reach',
                       'ConnectionMade')
 
SUBSCRIPTION_REACH_NOQOC = ('{db62fa23-4c3e-47a3-aef2-b843016177cf}',
                            'UbuntuOne Network Reach No Info',
                            'ConnectionMadeNoQOCInfo')
 
SUBSCRIPTION_REACH_NOQOC2 = ('{d4d8097a-60c6-440d-a6da-918b619ae4b7}',
                             'UbuntuOne Network Reach No Info 2',
                             'ConnectionMadeNoQOCInfo')
 
SUBSCRIPTIONS = [SUBSCRIPTION_NETALIVE,
                 SUBSCRIPTION_NETALIVE_NOQOC,
                 SUBSCRIPTION_NETLOST,
                 SUBSCRIPTION_REACH,
                 SUBSCRIPTION_REACH_NOQOC,
                 SUBSCRIPTION_REACH_NOQOC2 ]
 
SENSGUID_EVENTCLASS_NETWORK = '{d5978620-5b9f-11d1-8dd2-00aa004abd5e}'
SENSGUID_PUBLISHER = "{5fee1bd6-5b9b-11d1-8dd2-00aa004abd5e}"
 
# uuid of the implemented com interface
IID_ISesNetwork = '{d597bab1-5b9f-11d1-8dd2-00aa004abd5e}'
 
class NetworkManager(DesignatedWrapPolicy):
    """Implement ISesNetwork to know about the network status."""
 
    _com_interfaces_ = [IID_ISesNetwork]
    _public_methods_ = ['ConnectionMade',
                        'ConnectionMadeNoQOCInfo', 
                        'ConnectionLost']
    _reg_clsid_ = '{41B032DA-86B5-4907-A7F7-958E59333010}' 
    _reg_progid_ = "UbuntuOne.NetworkManager"
 
    def __init__(self, connected_cb, disconnected_cb):
        self._wrap_(self)
        self.connected_cb = connected_cb 
        self.disconnected_cb = disconnected_cb
 
    def ConnectionMade(self, *args):
        """Tell that the connection is up again."""
        service_logger.info('Connection was made.')
        self.connected_cb()
 
    def ConnectionMadeNoQOCInfo(self, *args):
        """Tell that the connection is up again."""
        service_logger.info('Connection was made no info.')
        self.connected_cb()
 
    def ConnectionLost(self, *args):
        """Tell the connection was lost."""
        service_logger.info('Connection was lost.')
        self.disconnected_cb() 
 
    def register(self):
        """Register to listen to network events."""
        # call the CoInitialize to allow the registration to run in an other
        # thread
        pythoncom.CoInitialize()
        # interface to be used by com
        manager_interface = pythoncom.WrapObject(self)
        event_system = Dispatch(PROGID_EventSystem)
        # register to listent to each of the events to make sure that
        # the code will work on all platforms.
        for current_event in SUBSCRIPTIONS:
            # create an event subscription and add it to the event
            # service
            event_subscription = Dispatch(PROGID_EventSubscription)
            event_subscription.EventClassId = SENSGUID_EVENTCLASS_NETWORK
            event_subscription.PublisherID = SENSGUID_PUBLISHER
            event_subscription.SubscriptionID = current_event[0]
            event_subscription.SubscriptionName = current_event[1]
            event_subscription.MethodName = current_event[2]
            event_subscription.SubscriberInterface = manager_interface
            event_subscription.PerUser = True
            # store the event
            try:
                event_system.Store(PROGID_EventSubscription, 
                                   event_subscription)
            except pythoncom.com_error as e:
                service_logger.error(
                    'Error registering to event %s', current_event[1])
 
        pythoncom.PumpMessages()
 
if __name__ == '__main__':
    from threading import Thread
    def connected():
        print 'Connected'
 
    def disconnected():
        print 'Disconnected'
 
    manager = NetworkManager(connected, disconnected)
    p = Thread(target=manager.register)
    p.start()

The above code represents a NetworkManager class that will execute a callback according to the event that was raised by the sens subsystem. It is important to note that in the above code the ‘Connected’ event will be fired 3 times since we registered to three different connect events while it will fire a single ‘Disconnected’ event. The way to fix this would be to register just to a single event according to the windows system you are running on, but since we do not care in the Ubuntu One sync daemon, well I left it there so everyone can see it :)


          Manuel de la Pena: IPC between python and c# with no DBus   

Sometimes on Linux we take for granted DBus. On the Ubntu One Windows port we have had to deal with the fact that DBus on Windows is not that great and therefore had to write our own IPC between the python code and the c# code. To solve the IPC we have done the following:

Listen to a named pipe from C#

The approach we have followed here is pretty simple, we create a thread pool that will create NamedPipe. The reason for using a threadpool is to avoid the situation in which we only have a single thread dealing with the messages from python and we have a very chatty python developer. The code in c# is very straight forward:

/*
 * Copyright 2010 Canonical Ltd.
 * 
 * This file is part of UbuntuOne on Windows.
 * 
 * UbuntuOne on Windows is free software: you can redistribute it and/or modify		
 * it under the terms of the GNU Lesser General Public License version 		
 * as published by the Free Software Foundation.		
 *		
 * Ubuntu One on Windows is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the	
 * GNU Lesser General Public License for more details.	
 *
 * You should have received a copy of the GNU Lesser General Public License	
 * along with UbuntuOne for Windows.  If not, see <http://www.gnu.org/licenses/>.
 * 
 * Authors: Manuel de la Peña <manuel.delapena@canonical.com>
 */
using System;
using System.IO;
using System.IO.Pipes;
using System.Threading;
using log4net;
 
namespace Canonical.UbuntuOne.ProcessDispatcher
{
    /// <summary>
    /// This oject represents a listener that will be waiting for messages
    /// from the python code and will perform an operation for each messages
    /// that has been recived. 
    /// </summary>
    internal class PipeListener : IPipeListener
    {
        #region Helper strcut
 
        /// <summary>
        /// Private structure used to pass the start of the listener to the 
        /// different listening threads.
        /// </summary>
        private struct PipeListenerState
        {
            #region Variables
 
            private readonly string _namedPipe;
            private readonly Action<object> _callback;
 
            #endregion
 
            #region Properties
 
            /// <summary>
            /// Gets the named pipe to which the thread should listen.
            /// </summary>
            public string NamedPipe { get { return _namedPipe; } }
 
            /// <summary>
            /// Gets the callback that the listening pipe should execute.
            /// </summary>
            public Action<object> Callback { get { return _callback; } }
 
            #endregion
 
            public PipeListenerState(string namedPipe, Action<object> callback)
            {
                _namedPipe = namedPipe;
                _callback = callback;
            }
        }
 
        #endregion
 
        #region Variables
 
        private readonly object _loggerLock = new object();
        private ILog _logger;
        private bool _isListening;
        private readonly object _isListeningLock = new object();
 
        #endregion
 
        #region Properties
 
        /// <summary>
        /// Gets the logger to used with the object.
        /// </summary>
        internal ILog Logger
        {
            get
            {
                if (_logger == null)
                {
                    lock (_loggerLock)
                    {
                        _logger = LogManager.GetLogger(typeof(PipeListener));
                    }
                }
                return _logger;
            }
            set
            {
                _logger = value;
            }
        }
 
        /// <summary>
        /// Gets if the pipe listener is indeed listening to the pipe.
        /// </summary>
        public bool IsListening
        {
            get { return _isListening; }
            private set
            {
                // we have to lock to ensure that the threads do not screw each
                // other up, this makes a small step of the processing to be sync :(
                lock (_isListeningLock)
                {
                    _isListening = value;
                }
            }
        }
 
        /// <summary>
        /// Gets and sets the number of threads that will be used to listen to the 
        /// pipe. Each thread will listeng to connections and will dispatch the 
        /// messages when ever they are done.
        /// </summary>
        public int NumberOfThreads { get; set; }
 
        /// <summary>
        /// Gets and sets the pipe stream factory that know how to generate the streamers used for the communication.
        /// </summary>
        public IPipeStreamerFactory PipeStreamerFactory { get; set; }
 
        /// <summary>
        /// Gets and sets the action that will be performed with the message of that 
        /// is received by the pipe listener.
        /// </summary>
        public IMessageProcessor MessageProcessor { get; set; }
 
        #endregion
 
        #region Helpers
 
        /// <summary>
        /// Helper method that is used in another thread that will be listening to the possible events from 
        /// the pipe.
        /// </summary>
        private void Listen(object state)
        {
            var namedPipeState = (PipeListenerState)state;
 
            try
            {
                var threadNumber = Thread.CurrentThread.ManagedThreadId;
                // starts the named pipe since in theory it should not be present, if there is 
                // a pipe already present we have an issue.
                using (var pipeServer = new NamedPipeServerStream(namedPipeState.NamedPipe, PipeDirection.InOut, NumberOfThreads,PipeTransmissionMode.Message,PipeOptions.Asynchronous))
                {
                    Logger.DebugFormat("Thread {0} listenitng to pipe {1}", threadNumber, namedPipeState.NamedPipe);
                    // we wait until the python code connects to the pipe, we do not block the 
                    // rest of the app because we are in another thread.
                    pipeServer.WaitForConnection();
 
                    Logger.DebugFormat("Got clien connection in tread {0}", threadNumber);
                    try
                    {
                        // create a streamer that know the protocol
                        var streamer = PipeStreamerFactory.Create();
                        // Read the request from the client. 
                        var message = streamer.Read(pipeServer);
                        Logger.DebugFormat("Message received to thread {0} is {1}", threadNumber, message);
 
                        // execute the action that has to occur with the message
                        namedPipeState.Callback(message);
                    }
                    // Catch the IOException that is raised if the pipe is broken
                    // or disconnected.
                    catch (IOException e)
                    {
                        Logger.DebugFormat("Error in thread {0} when reading pipe {1}", threadNumber, e.Message);
                    }
 
                }
                // if we are still listening, we will create a new thread to be used for listening,
                // otherwhise we will not and not lnger threads will be added. Ofcourse if the rest of the
                // threads do no add more than one work, we will have no issues with the pipe server since it
                // has been disposed
                if (IsListening)
                {
                    ThreadPool.QueueUserWorkItem(Listen, namedPipeState);
                }
            }
            catch (PlatformNotSupportedException e)
            {
                // are we running on an OS that does not have pipes (Mono on some os)
                Logger.InfoFormat("Cannot listen to pipe {0}", namedPipeState.NamedPipe);
            }
            catch (IOException e)
            {
                // there are too many servers listening to this pipe.
                Logger.InfoFormat("There are too many servers listening to {0}", namedPipeState.NamedPipe);
            }
        }
 
        #endregion
 
        /// <summary>
        /// Starts listening to the different pipe messages and will perform the appropiate
        /// action when a message is received.
        /// </summary>
        /// <param name="namedPipe">The name fof the pipe to listen.</param>
        public void StartListening(string namedPipe)
        {
            if (NumberOfThreads < 0)
            {
                throw new PipeListenerException(
                    "The number of threads to use to listen to the pipe must be at least one.");
            }
            IsListening = true;
            // we will be using a thread pool that will allow to have the different threads listening to 
            // the messages of the pipes. There could be issues if the devel provided far to many threads
            // to listen to the pipe since the number of pipe servers is limited.
            for (var currentThreaCount = 0; currentThreaCount < NumberOfThreads; currentThreaCount++)
            {
                // we add an new thread to listen
                ThreadPool.QueueUserWorkItem(Listen, new PipeListenerState(namedPipe, MessageProcessor.ProcessMessage));
            }
 
        }
 
        /// <summary>
        /// Stops listening to the different pipe messages. All the thread that are listening already will 
        /// be forced to stop.
        /// </summary>
        public void StopListening()
        {
            IsListening = false;
        }
 
    }
}

Sending messages from python

Once the pipe server is listening in the .Net side we simple have to use the CallNamedPipe method to be able to send messages to .Net. In my case I have used Json as a stupid protocol, ideally you should do something smart like protobuffers.

 call the pipe with the message
    try:
        data = win32pipe.CallNamedPipe(pipe_name, 
            data_json, len(data_json), 0 )
    except Exception, e:
        print "Error: C# client is not listening!! %s" % e.message

          Ubuntu chính thức hỗ trợ máy tính bảng   
Phiên bản mới nhất của hệ điều hành Ubuntu 14.04 LTS đã sẵn sàng sử dụng trên các dòng máy tính bảng, màn hình cảm ứng và màn hình độ phân giải cao.
          Ubuntu cho PC và smartphone chuyển sang giao diện phẳng   
Canonical đã cố gắng trong nhiều năm để biến Ubuntu thành hệ điều hành phổ biến cho nhiều loại thiết bị có kích thước khác nhau, có thể là TV, máy tính hoặc thậm chí là smartphone.
          Kích hoạt bộ gõ Tiếng Việt trên Ubuntu 13.10   
Để có thể gõ Tiếng Việt trên Ubuntu 13.10 bạn cần cài thêm ứng dụng hỗ trợ Unikey đi kèm với iBus có sẵn trên Ubuntu, mặc dù bản thân Ubuntu đã có phần hỗ trợ gõ Tiếng Việt nhưng rất khó sử dụng. Bạn có thể tìm thấy ứng dụng hỗ trợ Unikey trên hệ thống Ubuntu Software Center.
          Ngưng hỗ trợ 3 phiên bản Ubuntu Linux   
Canonical vừa quyết định giảm một nửa thời gian hỗ trợ cho Ubuntu Linux sau khi công ty công bố 3 phiên bản Linux phổ biến hiện nay sẽ ngưng hoạt động hỗ trợ vào tháng 5.
          Ubuntu sắp sửa ra mắt phiên bản di động?   
Trên trang chủ của Ubuntu, hiện đang xuất hiện dòng chữ: "So close, you can almost touch it" cùng đồng hồ đếm ngược cho tới ngày 3/1/2013 (theo múi giờ GMT+6).
          5 cách trải nghiệm Ubuntu trên máy tính của bạn   
Ubuntu có thể được khởi động từ USB hoặc ổ đĩa CD và sử dụng mà không cần cài đặt, thiết lập trong máy ảo, hoặc cài đặt song song với Windows. 5 cách thức dưới đây sẽ giúp bạn thực hiện điều này.
          10 lý do chọn Ubuntu 12.10 thay Windows 8   
Theo đánh giá, Ubuntu 12.10 hơn Windows 8 ở nhiều điểm như giao diện người dùng Unity, khả năng tuỳ biến, yêu cầu phần cứng, bảo mật… Sau khi hệ điều hành mã nguồn mở Ubuntu 12.10 “Quantal Quetzal” chính thức được công bố, trang chủ của Ubuntu xuất hiện khẩu hiệu đầy thách thức: “Avoid the pain of Windows 8”.
          4 cách tăng tốc cho Ubuntu   
Trong khi hầu hết người dùng Windows luôn tìm kiếm các giải pháp giúp cải thiện hiệu năng hệ điều hành thì người dùng Linux dường như không phải đối mặt với vấn đề này. Ubuntu là bản phân phối Linux được sử dụng rộng rãi nhất do giao diện đẹp và sức mạnh của nó. Nhưng ta vẫn có thể làm hệ điều hành chạy nhanh hơn nữa bằng một số cách đơn giản sau đây.
          Goobuntu: Ubuntu cho Google   
Hầu như ai cũng biết rằng Google sử dụng hệ điều hành nền Linux cho các máy tính để bàn cũng như server của hãng và một số còn biết rằng Ubuntu là lựa chọn dành cho máy bàn và đó chính là lý do vì sao nó được gọi là Goobuntu. Vậy chính xác thì vai trò của Ubuntu đối với Google là gì?
          Cài đặt Ubuntu trên Windows 8   
Bài viết sẽ hướng dẫn người dùng cài đặt hệ điều hành Ubuntu trên nền Windows 8 bằng phần mềm tạo máy ảo Hyper-V tích hợp sẵn trong Windows 8.
          Hướng dẫn truy cập file hệ thống exFAT trong Ubuntu   
Linux hay cụ thể là Ubuntu hỗ trợ nhiều hệ thống file. Nếu bạn cắm một ổ USB, hệ điều hành sẽ nhận dạng nó ngay lập tức và mở USB trong trình quản lý file. Tuy nhiên, nếu ổ cứng ngoai ở định dạng exFAT, thì máy tính sẽ không thể dò thấy thiết bị do không được hỗ trợ cục bộ định dạng này.
          Cài đặt tính năng Web App trên Ubuntu Precise   
Web App là một trong số những tính năng mới xuất hiện trên Ubuntu 12.10 sắp tới. Tính năng này cho phép các website, các ứng dụng và dịch vụ web được tích hợp vào chính màn hình desktop của Ubuntu để truy cập như một tính năng Ubuntu gần gũi từ panel, Unity Dash, hud, thực đơn thông báo, thực đơn âm báo…
          Kích hoạt Applet chỉ thị trên giao diện Unity của Ubuntu   
Nếu từng có thời gian sử dụng Ubuntu chắc hẳn bạn không thể quên những applet trong GNOME-là những biểu tượng nằm trong panel và cung cấp thông tin hay truy cập điều khiển. Trong trường hợp thiếu những applet này, hãy thử cài đặt applet trung gian cho giao diện màn hình Ubuntu Unity.
          Linux   
Jak zainstalować "linux"krok po kroku Ubuntu Oneiric Ocelot 11.10
          6 different ways to run an asp.net core web application   

<blink> Gratuitous self promotion: Joseph Cooney and I will be talking about running asp.net core on Linux, at the upcoming DDD Brisbane conference at 4:05 pm, 3rd of December, less than 3 weeks from now. </blink>

Now that you've suffered through the advertisement, here's some content.

PLEASE tell me if I say anything misleading in what follows... if I'm going to stand in front of people and pretend to be worth listening to, I want some rigorous vetting to occur first.

Tell me leon, what are all the ways you can run an asp.net core web site?

Well I don't know all the ways, but I do know 6 different ways!

Get your head around this lot (even if it requires extra background reading) and you'll understand a lot about how asp.net core sites work.

  1. Visual Studio F5

If you're developing an asp.net core website in Visual Studio, then you might run it by pressing F5, for debugging purposes. But that's not the only show in town...

  1. Commandline "dotnet run"

Your website is really a dotnet console app, that self-hosts a website using a tiny webserver called Kestrel. (There's a lot to unpack in that sentence, but just let it wash over you for now)

You can run it, from the console, by calling dotnet run from the folder that contains the project.json file.

The output in the console will say something like:

Now listening on: http://localhost:2000

So if you then browse to http://localhost:2000, you'll see your website (and the console will show logging info about your visit)

  1. dotnet publish → cd bin{...}\publish → dotnet YourProject.dll

On your local machine, you can prepare the application for deployment by running "dotnet publish". This builds the application artifacts, does any minification and so forth.

If you don't specify where the published results go they will end up in YourProject\bin\debug\netcoreapp1.0\publish

If you go into that folder you can run the resulting artifacts by calling:

dotnet YourProject.dll

Note that you don't call "dotnet run YourProject.dll" -- leave out the run for this one!

So the commands in full (starting in the folder that contains the project.json file)

dotnet publish
cd bin\debug\netcoreapp1.0\publish
dotnet YourProject.dll
  1. IIS

You can host it in IIS. I've never done this and don't intend to. Me and IIS are parting ways for now. But it can be run by IIS. More info here: Publishing to IIS and here: Publishing to IIS with Web Deploy using Visual Studio.

  1. Running on Linux, from the console.... "dotnet YourProject.dll"

You can grab the artifacts from your local computer's "publish" folder (created in step 3), and copy them onto a Linux machine (using a technique such as SSH, scp, sftp). Then you can run it in the console, exactly the same as step 3:

dotnet YourProject.dll

(This assumes that you have have .net core installed on that linux machine already, instructions here.)

From a different console attached to the same machine, you can view the website by running, for example:

curl http://localhost:2000

...which isn't the most comfortable way to surf the internet. But since our webapp isn't accessible from the open internet, it's about the best you can do at that point.

Also, as soon as that first console window is closed, the application will stop. So this is not your final production technique. For that....

  1. Running properly on linux, with supervisor + nginx

In Linux you can configure supervisor to run your application (and keep it running). This is analogous to the work that Application Pools do in Windows land.

And nginx is a popular webserver, analogous to using IIS on Windows. The two work together to run your application and deliver webrequests to it. You set up nginx to receive requests from the internet and pass them on to your application (i.e. to "proxy them" through to your application, also know as acting as a 'reverse-proxy')

Details about using supervisor, at TIL.secretGeek.net:

To learn how to configure nginx to proxy requests through to your application, try the article here:

With those in place, you can browse to your site from the internet (assuming you purchased a domain and configured it to point to the webserver, or perhaps you are browsing by IP address, like all hardcore nerds.)

Okay, that's 6 different ways to run your asp.net core web app.

(You can swap nginx for some other webserver like Apache, but I'm not counting that as a separate method, just a variation on number 6.)

(And you can user systemd and upstart instead of supervisor: notes here.)

What did I get wrong!?

Update: Some answers to this question have come in already...

I wrote Katana instead of Kestrel -- fixed.

You can of-course also host a .net core app inside an MS Word Macro.

I left out Azure. You can deploy .net core apps to Azure, and if that's something you're interested in doing, I think this article covers it nicely: Deploy ASP.NET Core 1.0 apps to Azure web apps.

Just kidding about the Word Macro.

Further reading

This document brings together documents on each deployment method: asp.net core: Publishing and Deployment.


          Linux Mint 17 KDE Overview & Screenshots   

Linux Mint 17 ‘Qiana’ KDE and Xfce editions were released late last month, just a few weeks after the main editions (Cinnamon and MATE) were put out. This release will have the same lifespan as the distribution which is based on, Ubuntu 14.04 Trusty Tahr, so it will be supported until 2019, for no less than five years.


          Free, Open-Source Linux Games Part IV: 5 More   

This is the fourth article in a series covering completely free and open-source games available for Linux, usually included in any of the popular distributions. These games are all included in the Ubuntu repositories, so you can install them with APT.


          The Latest Midori Browser Has Rewritten AdBlock, WebKit Improvements (Ubuntu Installation)   

Midori is a GTK-based browser with a clean interface that resembles the one of Google Chrome, using the WebKit rendering engine, and offering plenty of the usual features browsers like Firefox or Chrome ship with.


          A Look at Sunflower File Manager [Ubuntu Installation]   

Although it describes itself as a minimalist file manager, I have only words of praise for Sunflower, since it rather gathers a lot of features in a compact interface, and I do believe it needs a bit more attention.

read more


          How To Create a Linux Mint Persistent Live USB   

Because of its Ubuntu base, Linux Mint shares a lot of the same great features with its parent distribution while offering a more traditional desktop design. One big feature that Linux Mint is missing though is the ability to create a Live USB stick with persistent storage. In this tutorial I'll show how to create a Linux Mint Persistent Live USB drive using UNetbootin and GParted.


          ESRT @hlixaya @Sourcefire - Snort 2.9.1 Installation Guide for Ubuntu 10.04 LTS has been posted ...   
ESRT @hlixaya @Sourcefire - Snort 2.9.1 Installation Guide for Ubuntu 10.04 LTS has been posted http://www.secuobs.com/twitter/news/246374.shtml
          Reddit: Looking for an Ubuntu Unity close cousin? Elementary, my dear... - Ubuntu's Unity interface is gone, which means there's one less desktop to choose from in Linux-land. And while dozens remain to choose from, Unity was one of the most polished out there...   
submitted by /u/--Kai--
[link] [comments]
          Reddit: Question: Has any linux install ever worked 'perfectly' for any of you?   

Been playing around for about 8 years. The closest I've ever gotten to 'good enough' was probably Ubuntu 12.04 on a dell laptop like 4-5 years ago. I've tried dozens of distro-machine combos (and many more reinstalls) but none have kept me from going back to Microshaft.

No install has ever worked anywhere close to perfectly, to the point where its ease of use could make it my primary productive workspace... I've ALWAYS encountered bugs upon bugs on every single machine - random crashes, slow-downs, and a plethora of non-functionality and annoyances that keep me from making the best of my time.

Is it just me? Are there any "stable" releases out there that are truly STABLE that anyone would care to suggest? Thank you

Edit: Really just trying to get discussions going about OS stability (fueled by my history of frustration)

submitted by /u/mockfry
[link] [comments]
           Sharp Netwalker PC-T1 - Excellent Condition!   
I am selling Sharp Netwalker PC-T1 English GUI - Excellent Condition with Box on eBay!

Perfect for Linux/Ubuntu/Zaurus fans!

Spec: CPU: Freescale i.MX515 (ARM Cortex-A8) 800MHz
512MB RAM
1x 8GB Flash
1x microSDHC
1x USB 2.0 (for connecting mouse, keyboard etc.)
1x Mic Stereo In
1x Audio Stereo Out
1x Optical Pointing Device (on top right)
1024x600 pixels TFT LCD touchscreen display

Weight: 280 grams (0.6 pounds)
Wireless: 802.11 b/g, Bluetooth 2.1 + EDR
OS: Ubuntu 9.04 (Sharp customized)
Power Consumption: MAX 8.7W

Listing comes with case, stylus, original charger (AC 100-240V 50/60Hz - DC 5V 2.5A) and box.

http://www.ebay.com/itm/162267541983

          AMD Fusion E-450 ultraportable round-up   
AMD Fusion. A few months ago we were lamenting the fact that despite the AMD Fusion accelerated processing unit being an excellent platform to build a low cost ultraportable laptop on, there were few options available. Now, there are a plethora of options for lightweight ultraportable AMD Fusion options in the market. 

While there are the older E-350 units still avialable, and the slower C-50 and C-60's the one we recommend the AMD Fusion E-450. The E-450 has a dual core processor clocked running at 1.65GHz which is clocked just a bit higher than 1.60GHz of the older E-350. What makes a bigger difference in performance is that the E-450 supports DDR-1333 RAM, as opposed to the DDR-1066 supported by the older E-350 chip. However, the E-450 also comes with HD6320 graphics which has the ability to turbo boost its speeds by another 20% when needed.  All in all, the E-450 is a much faster platform than the older E-350.

Samsung NP305

Basically a AMD Fusion E-450 chip is fast enough to do the tasks you would require from an ultraportable, play 1080p video smoothly and do some light 3D gaming. Eight months ago we had a choice between an HP and a Sony. Now options from Asus, Dell, Fujitsu and Samsung are available on the local market.  

Price:

  • Dell Inspiron M102z - Php21,900 with Ubuntu Linux O.S.
  • HP Pavillion DM1 - Php22,900 with Windows 7 Starter
  • Asus Eee PC 1215B - Php22,990 with Windows 7 Starter
  • Samsung Series 3 - Php23,900 with Windows 7 Home Basic

  • Fujitsu Lifebook PH512 - Php26,995 with Windows 7 Home Basic
  • Sony Vaio YB - Php26,900 with Windows 7 Starter


Prices may vary from seller to seller. I included the bundled operating system, since in considering the price differences, you should also consider whether you intend to spend a little more cash upgrading the operating system. 


Asus Eee PC 1215B

Battery:

  • Asus Eee PC 1215B - 5200 mAh
  • Dell Inspiron M102z - 5000 mAh
  • Fujitsu Lifebook PH512 - 4800 mAh

  • HP Pavillion DM1 - 4400 mAh 
  • Samsung 
    NP305
     - 4000 mAh
  • Sony Vaio YB - 3500 mAh


All these units have the same internals, so battery size is good indication of battery life. Notably, the Asus 1215B has a 12.1-inch screen while the other units have smaller 11.6-inch screens. The larger screen would consume a little more juice. The Asus, Dell and Fujitsu are you best choices for endurance.

Weight:

  • Samsung NP305 - 2.7 lbs.

  • Sony Vaio YB - 
    3.2 lbs.
  • Fujitsu Lifebook PH512 - 3.2 lbs.
  • Asus Eee PC 1215B - 3.4 lbs.
  • Dell Inspiron M102z - 3.4 lbs.
  • HP Pavillion DM1 - 3.4 lbs. 

Only the Samsung Series 3 has a significant difference in weight. It is also the smallest of the bunch being more like the size of a 10-inch notebook.

Others:

  • Asus Eee PC 1215B - It is the only one equipped with a USB 3.0 port.
  • Dell Inspiron M102z - Comes with 4GB of RAM, the others only comes with 2GB. Also comes with SRS surround sound.
  • Fujitsu Lifebook PH512 - Has a 640GB hard drive, most of the others have 500GB hard drives.
  • HP DM1 - Comes with Beats Audio
  • Sony Vaio YB - The Sony Vaio YB comes with 384MB of dedicated video memory. Only comes with a 320GB hard drive, most of the others have 500GB hard drives. 
  • Samsung NP305 - Smallest of the bunch, meaning it also has the narrowest keyboard.


Which is the best? 

Each of the offerings has its merits. The Samsung NP305 is an easy choice for those looking for the most portable of the offerings. It is much lighter (0.5 to 0.7  pounds lighter) and smaller and slimmer it is the easiest to slip into a handbag and the kindest on the shoulder.

The Asus Eee PC 1215B  is easily the most capable. It has a larger 12.1-inch screen but is not significantly heavier than the other choices, except for the Samsung. It also has the much desired USB 3.0 port which allows for data transfer 10x faster than USB 2.0, for compatible devices. 

The other options all have their strengths. The Sony Vaio YB has the fastest graphics performance  while the HP Pavillon DM1 has the best sound.  The Dell Inspiron M102z has the most RAM while the Fujitsu Lifebook PH512 has the most storage. 

In the end, there are a nice set of AMD Fusion ultraportable in the market, each with its own merits. 

          Shadowsocks 原理简介及安装指南   

对 Shadowsocks 早有耳闻,当时我还在用 HTTP 代理、VPN 服务等翻墙,感觉它是个比较高大上的东西,也一直没有碰它。最近 GreenVPN 抽风,Mac 一直连接不上,害得我折腾了很久,最后还是买了一台国外的 VPS,于是开始折腾起 Shadowsocks,部署之前,对它做了一个简单的了解,下面先介绍下。

来源: http://www.barretlee.com/blog/2016/08/03/shadowsocks/
作者推特: https://twitter.com/barret_china
作者GitHub:https://github.com/barretlee
众所周知,天朝局域网通过 GFW 隔离了我们与外界的交流,当然,这个隔离并非完全隔离,而是选择性的,天朝不希望你上的网站就直接阻断。每一个网络请求都是有数据特征的,不同的协议具备不同的特征,比如 HTTP/HTTPS 这类请求,会很明确地告诉 GFW 它们要请求哪个域名;再比如 TCP 请求,它只会告诉 GFW 它们要请求哪个 IP。
GFW 封锁包含多种方式,最容易操作也是最基础的方式便是域名黑白名单,在黑名单内的域名不让通过,IP 黑白名单也是这个道理。如果你有一台国外服务器不在 GFW 的黑名单内,天朝局域网的机器就可以跟这一台机器通讯。那么一个翻墙的方案就出来了:境内设备与境外机器通讯,境内想看什么网页,就告诉境外的机器,让境外机器代理抓取,然后送回来,我们要做的就是保证境内设备与境外设备通讯时不被 GFW 怀疑和窃听。
ssh tunnel 是比较具有代表性的防窃听通讯隧道,通过 ssh 与境外服务器建立一条加密通道,此时的通讯 GFW 会将其视作普通的连接。由于大家都这么玩,GFW 着急了,于是它通过各种流量特征分析,渐渐的能够识别哪些连接是 ssh 隧道,并尝试性的对隧道做干扰,结果还是玩不过 GFW,众多隧道纷纷不通。
如果你理解了上面那道隐形的墙的原理,那 Shadowsocks 的原理就可以用一句简单的描述来理解了:它发出的 TCP 包,没有明显包特征,GFW 分析不出来,当作普通流量放过了。
1. 基本原理
具体而言,Shadowsocks 将原来 ssh 创建的 Socks5 协议拆开成 Server 端和 Client 端,两个端分别安装在境外服务器和境内设备上。

+------+ +------+ +=====+ +------+ +-------+


| 设备 | <-> |Client| <-> | GFW | <-> |Server| <-> | 服务器 |


+------+ +------+ +=====+ +------+ +-------+

Client 和 Server 之间可以通过多种方式加密,并要求提供密码确保链路的安全性。
2. 服务器端部署
Shadowsocks 封装后对用户而言就是一个程序指令,以 Ubuntu 为例,首先安装 pip,

apt-get install python-pip


pip install shadowsocks

注意 pip 的安装现在要求 python 版本大于等于 2.6,然后通过 pip 安装 shadowsocks。启动 shadowsocks 有两种方式,一种是通过一行命令直接启动:

ssserver -p PORT -k PASSWORD -m rc4-md5 --log-file /tmp/ss.log -d start

另一种是使用 config 文件启动,如先配置好文件(/etc/shadowsocks.json):

{


"server": "YOUR_SERVER_IP",


"server_port": 8388,


"local_address": "127.0.0.1",


"local_port": 1080,


"password": "PASSWORD",


"timeout": 300,


"method":"aes-256-cfb",


}

然后通过 ssserver 启动:

ssserver -c /etc/shadowsocks.json -d start

更加具体的配置说明,请戳 这里 和 这里
3. 客户端配置
Mac 客户端的下载地址:

配置位置:
配置1
配置方式:
配置2
相关说明:Shadowsocks for OSX 帮助
iPhone 设备可以选择 shadowrocket 客户端,需要 6 元购买,它的好处是,能够持续保持连接,休眠状态也不会断开,并且内置了规则,一些需要翻墙的域名才会自动翻墙。
刚开始在配置 ss-serser 的时候,我遇到了些问题,本地死活代理不成功,后来通过下面这种方式调试了下:
1.客户端通过 telnet ip port 确认 ss-server 是否正常开启
如果没有正常开启,有可能是设定的端口没有开放,

iptables -A INPUT -p tcp --dport 8388 -j ACCEPT

执行上述命令,将 8388 修改为你设定的端口即可。
2.如果第一步中连接正常,可以查看下 ss-server 的日志

ssserver -c /etc/shadowsocks.json --log-file /tmp/ss.log -d start

启动的时候添加 --log-file 参数,然后通过 tail -f /tmp/ss.log 查看实时日志,一般可以看出一点端倪。
本文的普及就到这里了,希望对你有些帮助。

          如何免费打造一个安全稳定超高速的科学上网环境   

我是 HyperApp 的作者,很久前就有用户一直催我要我写一些详细的图文教程,但我一直没抽出时间(懒)今天终于憋出来一篇文章,就讲讲大家最关心的话题吧 … 


来源:https://sspai.com/post/39361
作者:http://weibo.com/waylybaye
Twitter:https://twitter.com/waylybaye
开发者 GitHub:https://github.com/waylybaye
HyperApp 用户手册:https://github.com/waylybaye/HyperApp-Guide
相关阅读:HyperApp 让萌新也可以在云主机上自动化部署应用

这篇文章将会介绍如何获取由 Google 提供的一年 $300 美金的试用金,然后使用 HyperApp 搭建一个低延迟超高速的科学上网环境。网络延迟基本在50ms左右,并且可以流畅观看 Youtube 4K 视频。本文不需要读者有较深的技术背景,全部过程都可视化、自动化完成。

下文用 GCP 代表 Google Cloud Platform,指 Google 的整个云平台。GCE 是指 Google Cloud Engine,是 GCP 产品线里的一个主机产品。

本文前提条件

  1. 有一个 Google 账号,没有的话可以注册一个。
  2. 注册 GCP 免费试用需要用信用卡进行身份验证(只做验证,不会收费),所以你必须有一个 Visa/MasterCard 的信用卡才行。如果你是一位学生还没有信用卡,可以查看文末的学生优惠计划。

GCP 注册以及创建服务器

要访问 Google 首先需要你能科学上网,但是这篇文章又是介绍如何科学上网的,好像有点🐔生蛋蛋生🐔的矛盾… 其实你可以去App Store搜索 V屁恩,找一个免费的下载后,通过签到等方式暂时领取一天或者几个小时的免费低速试用来完成下面的教程。

注册 GCP 免费试用








GCP Free Tier
GCP Free Tier

  1. 登录Google账号后使用这个链接来注册 https://cloud.google.com/free/ ,在打开的页面中点击Try it Free
  2. 接受条款,并点击同意并继续
  3. 在页面中填入你的信息:
    • 账号类型:个人
    • 名称和地址:填写你的地址、电话等
    • 付款方式:添加一个信用卡。这个信用卡将会用来验证身份,防止GCP被滥用。
    • 点击 开始免费使用 完成注册
在后面的页面中,如果你能看到页面顶部有一个“礼物 🎁 ” 的小图标,说明已经获取了试用金。

新建主机








创建服务器
创建服务器

  1. 在左侧菜单中导航到 计算引擎 → VM 实例
  2. 点击加号按钮,创建一个 VM 实例。
  • 名称:随意填入一个易记得名字
  • 地区:建议选择 asia-east1-* 三个中的任意一个,这个机房是在台湾,国内的延迟只有 50~70ms,简直快到飞起(b区的好像问题有点多,可以先尝试a,c区)。
  • 机器类型:选择 “小型” (1.7G 内存) 就可以了。默认选中的 3.75G 的内存其实用不完。(只用SS的话建议选最低配置的,这样每月大约可以留出来80G流量)
  • 启动磁盘:默认的Debian 8就可以,推荐 Ubuntu 16.04 LTS。另外为了防止将来磁盘不够用,你可以点击右下角的更改,大小里使用 20G 或者 30G。
  • 防火墙:选中 “允许HTTP流量” 和 ”允许HTTPS流量“
    点击创建,稍等几分钟就会创建完毕,现在打开 HyperApp 开始配置这台服务器吧。

HyperApp

HyperApp 是一个部署自动化以及服务器监控管理的App,致力于让普通用户也可以使用云服务。这是我的另一篇介绍文章 HyperApp 让萌新也可以在云主机上自动化部署应用
本段介绍如何使用 HyperApp 管理刚刚创建的服务器,开启 BBR 加速,安装科学上网应用。







null

添加服务器








自动配置服务器
自动配置服务器

  1. 在 服务器 页面点击右下角的加号,然后选择第二个自动配置
  2. 点击开始,应用会自动生成一对密钥,大约耗时3~10秒。(如果你看不到开始按钮,可以暂时进入 设置 → 通用 → 更大字体 将字体调为最小然后重启App)
  3. 等到出现“一切就绪”时,点击复制将代码复制到剪贴板里,如果你用电脑操作的话可以点击发送将代码通过任意一种方式发送到电脑上。







在Web SSH里面运行命令
在Web SSH里面运行命令

在 GCE 计算引擎 → VM 实例 页面,选择刚刚创建的实例的 SSH ▽ 在浏览器窗口中打开 打开一个基于 Web 的 SSH 终端。
在浏览器的SSH中粘贴并运行刚刚复制的代码。等到二维码出现时,使用 HyperApp 扫描该二维码便会自动添加该服务器。添加后 HyperApp 会检测刚刚添加的服务器的基本运行状况。

永久添加公钥

注意:你可以暂时跳过本小段,进行后续的操作,如果 HyperApp 出现了认证错误的提示,可以回到这段进行操作。
刚刚的自动配置的功能在几乎所有主机上都可以用,但是 GCE 是个例外,它会清空用户自行添加的公钥,所以几分钟后可能就会出现密钥认证失败的错误。你可以通过以下步骤永久性的设置密钥。
  1. 打开 更多 → SSH Keys → 点击唯一的一个 Key → 复制公钥
  2. 在GCP 中打开 计算引擎 → 元数据 → SSH 密钥 点击修改后,复制并添加刚刚的公钥。
  3. 添加后页面会出现一行新记录,主要有两列:用户名 和 密钥。然后打开 HyperApp → 服务器 → 点击服务器下面的齿轮按钮 → 修改用户名为刚刚显示的用户名,保存后就可以了。(如果你按照上面的步骤操作,HyperApp 自动修改用户名所以这两个用户名应该默认是一致的)
  4. 添加以后同一账号下所有新建的主机都可以不用再次操作了。
  5. 如果还是验证失败请确保下图中的用户名一致。
    这两个用户名要保持一致
    这两个用户名要保持一致

开启 BBR 加速,跑满带宽

这一步并不影响科学上网,但是会极大的提高上网质量。BBR是 Google 开发的TCP拥塞控制技术,并且已经合并到较新的Linux内核中。它的主要作用是可以让你跑满服务器的带宽。
比如说没有开启的情况下你观看 Youtube 720P的视频都可能会卡,但是开启了BBR后 1080P 的视频完全无卡顿,就连 4K 视频也可以流畅观看(有些网络特别卡的除外)。
要开启 BBR 需要升级 Linux 内核,不过不用急,在 HyperApp 中开启非常简单:







BBR
BBR

  1. 在服务器卡片中点击右上角的 Terminal 图标(就是那个长的像个表情 [ >_ ] 的按钮),进入 SSH 终端。
  2. 选择最下面的一行工具栏的第一个图标,然后点击 "teddyun/BBR" 一键脚本的链接,点击后会让你确认是否下载并执行外部脚本。
  3. 点击确认后会自动下载并执行,执行过程中需要点击键盘上回车键确认继续。如果想中断执行,那么点击 ctrl 后按 c 终止执行。
  4. 稍等几分钟,等待系统内核升级到最新版后会自动重启(重启时终端会显示Shell Closed)然后关闭窗口即可。
  5. 如果你想确认 BBR 有没有安装成功,那么再次进入 SSH 终端,输入 lsmod | grep bbr 如果能看到一行记录就是成功了。
注:这是系统层面的升级,执行完毕后不必对其它应用做任何配置。客户端也不需要做任何配置。
一定要先升级BBR再安装应用,反过来会出错

部署科学上网应用








创建、配置应用
创建、配置应用

  1. 在 商店 页面 网络 分组下选择任意一个应用,这里我们选择占用资源最少的 ****-libev,(下文用SS代替)点击该应用。在弹出的对话框中选择刚刚添加的服务器(主机名应该是“AutoConfig”,是的,注意这个AutoConfig是主机名字,不是其它功能…),点击创建应用
  2. 在出现的配置页面中输入简单的一些配置:
  • Port: 端口,可以填入 80 或者 443(因为用其它端口需要设置防火墙)
  • Password:随便填入一个密码
  • Encrypt:选择一种加密方式,推荐针对移动访问优化的 chacha20
  • OBFS: 使用 OBFS 可以将 SS 流量伪装成正常的网页访问从而达到欺骗效果,可以防止被墙或者运营商干扰。另外还有人利用此功能实现“免流”效果:运营商会对一些域名免收流量费,于是可以将所有SS流量伪装成对某个域名的访问从而达到免流效果,具体此处不表。可以暂时忽略这个选项。
保存后在应用卡片中点击选中的服务器,然后选择安装就可以了。整个过程可以看👇的GIF动画:







应用创建、配置、安装的动画
应用创建、配置、安装的动画

  • 如果你使用其它端口,请参考文末的 设置GCE防火墙 如何开启防火墙。
  • 如果安装的过程中出现错误,可以截图后进群寻求帮助,但最快的方式依然是:建一个新的vm重来一遍!真的,很神奇的!

客户端设置

嗯,服务端这就安装好了,不过在手机及Mac/PC上想要使用还需要有个客户端。

iOS

iOS 上有很多 SS 的客户端可供选择,收费的有 小火箭、土豆丝、Cr*、Sur 等。免费的可以使用 Wingy
配置很简单,只需要点击应用卡片里的服务器栏,然后选择顶部的QR,截图后用其它客户端扫码即可。
如果你想手动配置,请按指示填写参数即可,下面和配置和上面 【部署科学上网应用】段落里第2步的配置一一对应:
  • 服务器:填入你的服务器外部 IP(GCP后台 VM列表页面有一列外部IP,就是那个)。
  • 端口:上面配置界面里面的端口(80,或者443)
  • 密码:上面配置界面里面的密码
  • 加密方式:上面配置界面里面的加密方式

Mac/Windows

Mac 和 Windows 上都有免费的 SS 客户端可以使用,配置方法也是只要输入你的IP、端口、密码和加密方式即可。

其它玩法

现在你有了一台 1.75G 内存的服务器了,只用来做SS好像有点浪费(SS只占用几M内存),其实使用 HyperApp 还有很多其它玩法,商店里面有很多其它应用,都可以自动安装配置。
比如你可以自己搭建一个博客、网站、论坛,个人网盘,聊天服务,并且可以自动配置HTTPS。如果你玩游戏可以创建一个 Minecraft 服务器。或者部署一个接收微信消息转发到Telegram的机器人。更多信息可以参阅👇 的 HyperApp 文档和教程。

HyperApp 支持

如果你遇到了各种各样的技术问题,比如无法安装、无法连接等,可以在群里召唤机器人或者开发者进行解答。或者在App里面点击 发送反馈邮件 获取帮助,这两种是最快的获取帮助的方式。

自建和购买商业服务对比有什么优势?

  1. 最主要的优势是隐私和安全,如果你看下上面SS的日志,你就知道服务商可以知道你的所有浏览历史的,如果你访问了不支持HTTPS的网站,那么请求内容也可能被监控(比如密码信息)。
  2. 另外是质量和成本,很多商家是使用和上面同样的机器但是卖给几百个人,你应该能明白了。成本方面没有免费试用的话1个人用可能会有点贵,但如果和朋友家人一起用就超值了,比如使用$2.5/月的 Vultr,每月500G 流量够很多人用的。

学生优惠计划

设置 GCE 防火墙

  1. 在GCP后台点击 网络 → 防火墙规则
  2. 点击 创建防火墙规则
    • 名称:随便输入一个名称
    • 目标:选择 网络中的所有示例
    • 来源过滤:0.0.0.0/0
    • 协议和端口:指定的协议和端口 下面输入 tcp;udp:端口号

常见问题:

如何建多个账号?

你可以创建多个应用给不同的人使用,一个应用只占 1~2M 内存(但注意每个应用的端口必须不同)

信用卡被扣一美元?

这是验证信用卡信息是否正确的,快则几分钟慢则几小时就会退款。

结算账号被关闭怎么办?

你可能没仔细填信用卡信息,查看你的邮箱应该会有一封谷歌的通知邮件,按照提示传下资料,几个小时左右就会被解封了。

          DNSCrypt 在 Windows 和 Ubuntu 下的配置   

因为我只用 Ubuntu 和 Windows 10,所以在这俩系统下完成 DNSCrypt 的配置对我来说就足够。
来源: https://plumz.me/archives/1760/

Ubuntu 下可以添加 PPA 安装 dnscrypt


sudo add-apt-repository ppa:anton+/dnscrypt
sudo apt-get update
sudo apt-get install dnscrypt-proxy
安装完毕后先修改一下,因为默认用的不是 OpenDNS 的服务器。
sudo vim /etc/default/dnscrypt-proxy
选区_782.png
在里面注释掉原本的 DNS 服务器,解除上面 OpenDNS 的服务器注释,当然你不这么做也可以,只是可能速度有点慢。
选区_783.png
之后记得自己把 DNS 修改为 127.0.0.2 ,每个无线 AP 记得都得改一次。
相比之下 Windows 下的 DNSCrypt 简单到爆炸。
直接使用 dnscrypt-winclient 即可。
而且可以直接下载:
https://github.com/Noxwizard/dnscrypt-winclient/blob/master/binaries/Release/dnscrypt-winclient.exe (点Download 下载)
同时你还得下载 DNSCrypt 的基本文件:
https://download.dnscrypt.org/dnscrypt-proxy/ (选最新版) 
将上面下载的 dnscrypt-winclient.exe 放入解压好的基本文件目录里,右键管理员运行即可。
68747470733a2f2f7261772e6769746875622e636f6d2f4e6f7877697a6172642f646e7363727970742d77696e636c69656e742f6d61737465722f73637265656e73686f742e706e67.png
图形界面简单明了,记得服务器选择 cisco OpenDNS,之后把自己 Windows 网络链接 TCP/IP 的 DNS 修改为 127.0.0.1 即可。
顺便记得吧 dnscrypt-winclient 里面的 Install as service 点击一下,之后就不用每次重启再运行了。

          新的 SSR 多用户管理脚本   
新的SSR多用户管理脚本之前写了一个SSR-Bash项目,用于管理SSR多用户多端口的。后来出了很多BUG。有人说要稳定,不求功能更新。有人说要不断增加新功能。于是我有点蒙蔽。后来遵循破瓦大神的指示,研究了下SSR自带的mujson模式,于是新开了一个坑。这个版本的管理脚本会比之前的更加稳定。基本上都是简单的调用mujson_mgr.py就可以了。要说技术含量也没有多少,只是写着玩玩方便下大家而已。刚好我新站点 www.zhujiboke.com 缺少点文章素材,于是拿来水一篇文章。
来源: https://www.zhujiboke.com/2017/03/278.html

项目地址:https://github.com/FunctionClub/SSR-Bash-Python
新的SSR多用户管理脚本
新的SSR多用户管理脚本

安装脚本

(请使用 Xshell 连接以取得最好的中文编码支持)

  1. wget -N --no-check-certificate https://raw.githubusercontent.com/FunctionClub/SSR-Bash-Python/master/install.sh && bash install.sh


安装完成后输入 ssr 回车即可使用

卸载脚本


  1. wget -N --no-check-certificate https://raw.githubusercontent.com/FunctionClub/SSR-Bash-Python/master/uninstall.sh && bash uninstall.sh


系统支持

  • Ubuntu 14
  • Ubuntu 16
  • CentOS 6
  • CentOS 7
  • Debian 7
  • Debian 8(推荐)

功能介绍

  • 一键开启、关闭SSR服务
  • 添加、删除、修改用户端口和密码
  • 自由限制用户端口流量使用
  • 自动修改防火墙规则
  • 自助修改SSR加密方式、协议、混淆等参数
  • 自动统计,方便查询每个用户端口的流量使用情况
  • 自动安装Libsodium库以支持Chacha20等加密方式
  • 每月自动清空用户流量
  • 一键封禁BT下载、垃圾邮件发送功能。(感谢逗比大佬提供)

与上一版改进

  1. 支持每个端口单独自定义加密方式,混淆,协议。
  2. 暂时支持了部分兼容协议。
  3. 支持CentOS 系列系统。
  4. 每月自动清空流量使用记录。
  5. 分别记录上传流量和下载流量。
  6. 动态管理用户,每一次更改用户不会影响原有用户端口。
  7. 恢复ShadowsocksR所支持的兼容模式。
  8. 增加返回上一级菜单功能。
  9. 支持为每一个端口添加不同协议参数与混淆参数。

已知的问题

  • 未添加开机自动启动
  • 最后一名用户无法删除
  • 部分系统上IP地址识别错误,导致SSR链接生成有问题,手动修改即可。

常见问题

问1:是否需要自己先安装SSR服务端?
答1:不需要,脚本默认自带了安装SSR的部分。请使用纯净的系统进行安装。
问2:是否能和Oneinstack一起安装?
答2:原则上是可以的,但是并不建议放在生产环境中使用,建议单独使用一台VPS来扶墙。
问3:为什么无法开启兼容模式?
答3:因为SSR服务端只支持部分协议的兼容设置,所以并非所有的协议插件都能兼容原版。具体列表参考 SSR协议插件稳文档
问4:脚本安装好连接上没有网络?
答4:请确认好您已经正确填写了加密方式、协议和混淆,并且使用最新的SSR客户端而不是SS客户端。
问5:脚本还是无法使用!
答5:如果可以输入 ssr 命令打开功能菜单,请选择 1 服务管理 再选择 4 查看日志。发送给我详细截图以解决问题。
问6:脚本是否支持 UDP 转发?
答6:默认是开启了 UDP转发的,如果无法使用,请检查SSR官方文档修改本地配置,SSR服务端默认安装在 /usr/local/shadowsocksr

参考资料

ShadowsocksR
ShadowsocksR-manyuser mudbjson
写的很简单,求大佬们勿喷~

          By: Griobhtha   
KDE "RC"... I don't think so. Overzealous evangelists can complain about whiners all they want, but that doesn't change the fact that KDE4 is still an ALPHA, NOT a beta, and certainly not a release candidate (are they going to "code freeze" the non-functional and missing components?). I use KDE in OpenSuSE or Kubuntu every day. I'm also required to deal with Ubuntu, Vista, and Leopard every day. KDE 4 is heading down the wrong road if the Plasma (read Aero) interface is going to become center stage over workability and functionality. Basic function calls don't work. Basic desktop management and customization is either missing or non-functional. How can you have something in beta without the MAJORITY of the NEEDED interface components existing and functional in some semblance for TESTING. KDE was different because it provided ready access to control(s). Control is seriously missing from KDE 4. It is too similar in non-functionality to Vista, Leopard, and even Gnome. I have nothing against a good looking desktop, but if it can't or won't provide control and functionality, it is no good. Again, if this is "beta" "release candidate", KDE is should change the name to Broken Knome.
          Blackberry como Modem en Kubuntu 9.10 usando Internet Ilimitado de EntelPCS   
El asunto es sencillo, en Chaimávida no existe posibilidad alguna de contratar un servicio de Internet decente, salvo aquellos de internet móvil que naveguen a velocidad 3G con capacidad limitada de descarga. Y a decir verdad ninguno de esos servicios me llaman la atención lo suficiente como para estar pagando otra conexión anexa a la que ya estoy pagando. Actualmente mi Blackberry 8300 posee (cuando pago la cuenta del teléfono) Internet Ilimitado contratado con Entel por $ 5.990 CL mensuales.

Es sabida la ausencia de soporte de parte de RIM para la plataforma Linux, pero como siempre ocurre hay gente dispuesta a hacer algo y ese algo es un programita llamado Barry, el cual dentro de sus funciones nos permitirá cargar la batería del Blackberry (versiones antiguas del kernel no dejaban cargarlo), sincronizar los contactos y lo más importante para este post ayudar a configurar el teléfono como Modem para conectar el computador a Internet.


Trataré de ir paso por paso con la finalidad de que a alguien le ayude este post, el cual me hubiese encantado encontrar cuando anduve buscando información en la red. Entonces sin más preámbulos los pasos son los siguientes:


1.- Instalar Barry: en versiones antiguas de Kubuntu (recuerdo la 8.10 y la 9.4) barry se encontraba en los repositorios, por ende con un simple comando de consola : "sudo apt-get install barry" el programa se instalaba completo. Sin embargo en la recién salida versión de Kubuntu (Kubuntu 9.10 Karmic Koala ), por ende existen dos formas de instalar la cuales detallo a continuación:

1.1.- Descargando los paquetes deb: El día que instale barry fue el mismo día en que salio la nueva versión de Kubuntu, por ende no encontré repositorios para la nueva distribución y como no quería desconfigurar la nueva instalación decidí instalar el programa y sus dependencias desde su página en sourceforge.

El listado de archivos a descargar es el siguiente:

libbarry-dev_0.16-0_ubuntu904_i386.deb
barry-util_0.16-0_ubuntu904_i386.deb
libbarry0_0.16-0_ubuntu904_i386.deb
opensync-plugin-barry-dbg_0.16-0_ubuntu904_i386.deb
opensync-plugin-barry_0.16-0_ubuntu904_i386.deb
barrybackup-gui-dbg_0.16-0_ubuntu904_i386.deb
barrybackup-gui_0.16-0_ubuntu904_i386.deb
barry-util-dbg_0.16-0_ubuntu904_i386.deb
libbarry0-dbg_0.16-0_ubuntu904_i386.deb


Los archivos son para la plataforma 386 y si bien los archivos son para la la versión 9.04 de Kubuntu funcionan perfectamente para la 9.10.

El asunto sería descargar e instalar en el mimo orden.

Si no quieren hacerlo de esta forma hagase de la siguiente:

1.2.- Desde repositorios: semanas despues de mi instalación encontré que ya existían los repositorios para Karmic, así que si quieren hacerlo de esa forma el repositorio se encuentra
aquí.


2.- Configurar los script de conexión : (info sacado mayormente desde este post)

Los dos script que hay que configurar se encuentran respectivamente en /etc/ppp/peers/ y el otro en /etc/chatscripts. Barry viene con script preconfigurados para compañias telefónicas gringas y europeas, asi que tomaremos uno de esos archivos y los modificaremos con la información de conexión de Entel.

2.1.- Modificar el archivo barry-tmobileus que se encuentra en /etc/ppp/peers y copiar el siguiente código


Código:

#
# This file contains options for T-Mobile US Blackberries
#
# It is based on a file reported to work, but edited for Barry.
#

connect "/usr/sbin/chat -f /etc/chatscripts/barry-entelpcs.chat"

# You may not need to auth. If you do, use your user/pass from www.t-mobile.com.
#noauth
user "entelpcs"
password "entelpcs"

defaultroute
usepeerdns

noipdefault
nodetach
novj
noaccomp
nocrtscts
nopcomp
nomagic

#nomultilink
ipcp-restart 7
ipcp-accept-local
ipcp-accept-remote

# added so not to disconnect after a few minutes
lcp-echo-interval 0
lcp-echo-failure 999

mtu 1492
debug
debug debug debug

pty "/usr/sbin/pppob -l /etc/ppp/peers/error -v"

# 921600 Works For Me (TM) but won't "speed up" your connection.
# 115200 also works.
115200
local

Guardar como barry-entelpcs y salir.

2.2.- Crear el archivo barry-entelpcs.chat en /etc/chatscripts/ con el siguiente código:


Código:

ABORT BUSY ABORT 'NO CARRIER' ABORT VOICE ABORT 'NO DIALTONE' ABORT 'NO DIAL TONE' ABORT 'NO ANSWER' ABORT DELAYED ABORT ERROR
SAY "Initializing\n"
'' ATZ
OK AT+CGDCONT=1,"IP","imovil.entelpcs.cl"
OK-AT-OK ATDT*99#
CONNECT \d\c

En esta parte hay que poner ojo ya que si la configuración a Internet es con bam.entelpcs.cl hay que hacer el respectivo cambio en el archivo.

Nuevamente Guardar como barry-entelpcs.chat y Salir

3.- Conectar el equipo

Ahora conectar al equipo, cuando pregunte si quieres ser usado como unidad de almacenamiento masivo poner que no. Abrir la consola y tipear en ella : "sudo pppd call barry-entelpcs" (sin comillas) ingresar la clave de root y esperar que el script conecte y voilá!! ya se puede navegar usando la blackberry como modem.


Observaciones:

A mi el equipo se me desconecta automáticamente cuando me llaman, no así cuando me llegan correos o mensajes. Para solucionar esto hay desconectar el script en la consola con Control + Z , luego reseetear la conexión de la berry con el comando "breset" y luego ejecutar nuevamente el comando
"sudo pppd call barry-entelpcs". Con esto la conexión queda nuevamente reestablecida.

Konqueror en modo navegador web anda de maravilla. Yo he estado haciendo scroobling a Lastfm con Amarok, hablando a través de Kopete y navegando en páginas livianas. Olvidense de cargar videos o bajar cosas muy pesadas, recuerden que el teléfono navega a traves de la red EDGE que es bastante buena para el teléfono, pero no esta pensada como banda ancha, sin embargo cumple 100 % para sacar de apuro. Además tiene algo de romántico esperar un poco por la carga de páginas ...como volver a esa navegación de mediados de los noventa.

Si se va a usar Firefox recordar desmarcar la opción "Trabajar en Modo desconectado" en el Menú Archivo.

P.D : Este post esta redactado en Chaimávida, lugar donde se hace la exquisita Cerveza Artesanal Kurüko, aspi que de paso visite:

http://www.kuruko.cl


Publicado en blogger





          The Overnightscape #253 (8/3/05)   
Tonight's subjects include: Fractal pyramids, Ubuntu Linux, board games of war and conquest, Matcha Green Tea Blast, The Open CD, textmode, beverage review ("Kentucky Vintage Original Sour Mash Bourbon"), listener email ("CB from Chicago", "Eduardo from California", "Jessica from Florida (Concrete Angel)", "Warchieftan Jarl Hakkon Thunderbeard V"), more Jamba Juice annoyances, a little foot, Tacoma, and humanity towards others. Hosted by Frank Edward Nora. (30 minutes)
          The Overnightscape #286 (9/19/05)   
Tonight's subjects include: What's in the box?, royal transport, vegetarian lamb meat, The Overnightscape Wikipedia article (started by Steve from The Obtuse Angle podcast), Game Boy Micro, Nintendo World, The Audio Field Trip ("The Rickshaw Ride" - Part 1), three things to be afraid of, synchronicity report ("Real Time With Bill Maher - fortune cookies"), listener email ("Nida from Washington State"), opening the package Nida sent, Ubuntu, Kubuntu, and The Lost Audio Field Trip. Hosted by Frank Edward Nora. (30 minutes)
          The Overnightscape #408 (3/8/06)   
Tonight's subjects include: Squash at Grand Central, Pong variants, anarchism, ice pellets, Concorde, Intrepid, listener mail ("Andy from Oakland, California - listening to all the episodes - energy gels"), Silicon Valley, underground economy, Ubuntu Lite, energy gel review ("Clif Shot - Cola Buzzzz with Caffeine"), mini music review ("Buzz Buzz Buzzzzzz"), Skype, drugs, Litter Leash, Loompanics, locks and locksmithing, battery change, outlaw history, and smoking Cuban cigars on the loading dock. Hosted by Frank Edward Nora. (30 minutes)
          The Overnightscape #470 (6/2/06)   
Tonight's subjects include: Egg shards, numerology, making stuff up, Black Sesame Pocky, commentary on the "Anything But Monday" episodes, Senator Lloyd Bentsen, guy who owns the moon update, melting chocolate, Frank on Madpod and Techy2, Japanese candy/toy review ("Choco Egg - Honda Collection, airplanes, trains", "ChocoQ Animatales"), Atlanta, Hate The Radio, parrot, topiary, tank, missing axle, 06-06-06, Podcasters moving into Second Life, Curry Castle, Rumor Girls Penthouse, Podcast Island, tiny models, Ubuntu 6.06 "The Dapper Drake", and a possible farewell to the original Overnightscape headphones. Hosted by Frank Edward Nora. (30 minutes)
          Change URL Browser for Document Viewer in Ubuntu   

How to change the URL browser for document viewer in Ubuntu. The document viewer is known as Evince. Changing this is simple.

The post Change URL Browser for Document Viewer in Ubuntu appeared first on Tom Ordonez.


          How To Install VirtualBox on Ubuntu   

Follow these simple steps to install Virtualbox on Ubuntu using the command line.

The post How To Install VirtualBox on Ubuntu appeared first on Tom Ordonez.


          How to Install Rust on Ubuntu   

A short tutorial to install Rust on Ubuntu. Rust runs at warp speed and guarantees thread safety. Rust uses a package manager called Cargo.

The post How to Install Rust on Ubuntu appeared first on Tom Ordonez.


          How To Install Scala on Ubuntu   

A simple guide to install Scala on Ubuntu. Scala is object oriented and supports functional programming. Scala is used by Twitter, Foursquare, Coursera,...

The post How To Install Scala on Ubuntu appeared first on Tom Ordonez.


          Install xmllint on Ubuntu   

Install xmllint on Ubuntu

The post Install xmllint on Ubuntu appeared first on Tom Ordonez.


          Disable Touchscreen on Ubuntu   

If you are not in love of laptop touchscreens. Disable touchscreen on Ubuntu. There are so many touchscreens in your life, why do you need another one?

The post Disable Touchscreen on Ubuntu appeared first on Tom Ordonez.


          Ubuntu Wifi Network Disconnected After Sleep   

I was happy enjoying my hack night. Closed the laptop. Came back to my Ubuntu: Wifi Network Disconnected. It took some witchcraft to solve this issue...

The post Ubuntu Wifi Network Disconnected After Sleep appeared first on Tom Ordonez.


          Comment on Best GNOME Shell Themes For Ubuntu 14.04 by TomB   
These are icon themes...
          La casa del misterio (grandes, fuertes, héroes)   

A veces pienso que estamos tan cansados de ser engañados, de descubrir que el camino de la sumisión siempre es el más cómodo, que no miramos a los estantes de arriba o de abajo en el supermercado y aunque sabemos que los productos a la altura de los ojos son los que tienen trampa. Caemos



Ha pasado mucho tiempo desde la casa del misterio por eso tú deberías saber que ya no puedes hacerme daño. He descubierto tu secreto: extenderte por el mundo. Cuando se acabe la mentira responderás con el silencio. Qué difícil darse cuenta en qué te estás convirtiendo. Otra vez diste la vuelta,¿para qué hablas conmigo? ¿para qué hablas conmigo? Íbamos a ser grandes, fuertes, héroes. El arañazo de un recuerdo escuece todo el tiempo, tráfico en la ciudad. Pensando en cosas que quizá tuve me ha pitado un BMW ¿Qué te pasa a ti chaval? Como ves sobre la mesa se deslizan todas las piezas ¿Dónde iba el miedo y dónde la verdad?. Destruido este mundo de los restos que ahora ves construiré el mío. Construiré el mío, Construiré el mío. Una razón, un dios, un credo miran por el agujero, yo me vuelvo a desnudar. Puede que el rumor sea cierto, 24h abierto, te los envuelven para llevar. Puede que el ruido de un insecto te despierte de un gran sueño, volveré a dormir y construiré el mío. Construiré el mío. Se abre el cielo ya. Lo importante habita en otro lugar esperando que todo se haga más pequeño. Ha pasado tanto tiempo. Ha pasado tanto tiempo. Íbamos a ser grandes, fuertes, héroes. Grandes, fuertes, héroes. Grandes, fuertes, héroes. Ha pasado tanto tiempo, y seréis grandes, fuertes, héroes

          Strange Wi-Fi and overheating issues with Linux kernel 4.x (And how to fix it the easy way)   
A while ago I wrote a post on fixing monitor resolutions for my new laptop when booting into Linux. As part of the troubleshooting I upgraded my linux kernel from 3.x to 4.0.x. While this did nothing to fix the issue, I left it that way cause downgrading kernel versions without any reason seems silly at best. Within a day of upgrading it, I noticed issues sometimes when connecting to the wifi after laptop was woken up from sleep or rebooted. This was happening randomly and to make matters worse my laptop used to become unresponsive due to which I had to reboot it, sometimes between important tasks. The issue wasn't happening too frequently, which is why I took so long to finally look into fixing it.

I started with searching bugs for my wireless card 7260 and wifi driver . This post should give some more details about the issue. I found two commands, and when faced with the problem, I simple created/removed one of these files and re-enable networking to fix it. The commands were found on ubuntu forums, and look something like:

echo "options rtl8723be fwlps=N ips=N" | sudo tee /etc/modprobe.d/rtl8723be.conf
sudo sh -c 'echo "options iwlwifi 11n_disable=1" >> /etc/modprobe.d/iwlwifi.conf'

While the above commands did help reduce the frequency of the issue, the issue still persisted. Hence, out of options, I decided to upgrade my kernel to the latest version (4.8.13 at the time). And Voila! That fixed my wi-fi woes.  However, my laptop started to overheat frequently and hang so I had to reboot it, sometimes several time a day. That seemed far from ideal. I had fixed an issue only to face a slightly bigger and more pervasive issue.

Dec 22 00:06:47 payal-ThinkPad-X1-Carbon-3rd kernel: [20763.298776] CPU2: Core temperature above threshold, cpu clock throttled (total events = 1)
Dec 22 00:06:47 payal-ThinkPad-X1-Carbon-3rd kernel: [20763.298777] CPU3: Core temperature above threshold, cpu clock throttled (total events = 1)
Dec 22 00:06:47 payal-ThinkPad-X1-Carbon-3rd kernel: [20763.298779] CPU1: Package temperature above threshold, cpu clock throttled (total events = 1)
Dec 22 00:06:47 payal-ThinkPad-X1-Carbon-3rd kernel: [20763.298780] CPU0: Package temperature above threshold, cpu clock throttled (total events = 1)
Dec 22 00:06:47 payal-ThinkPad-X1-Carbon-3rd kernel: [20763.298782] CPU3: Package temperature above threshold, cpu clock throttled (total events = 1)
Dec 22 00:06:47 payal-ThinkPad-X1-Carbon-3rd kernel: [20763.298785] mce: [Hardware Error]: Machine check events logged
Dec 22 00:06:47 payal-ThinkPad-X1-Carbon-3rd kernel: [20763.298787] CPU2: Package temperature above threshold, cpu clock throttled (total events = 1)
Dec 22 00:06:47 payal-ThinkPad-X1-Carbon-3rd kernel: [20763.298789] mce: [Hardware Error]: Machine check events logged

I first thought of manually adjusting core frequencies to control the temperature when it crosses a certain threshold, but when I went looking for the list of available frequencies, I did not find the file /sys/devices/system/cpu/cpu(x)/cpufreq/scaling_available_frequencies (where x =>  0 .. n).

Unable to figure out a way to get this list, I reached out to my colleague for help. He suggested that my driver (intel_pstate) might be what is hiding (I believe abstracting is the better term but we gave away with political correctness in 2016 so well.) the available frequencies to prevent the user do the exact sort of thing I was trying to do.

This made me look closer into the intel_pstate driver. I got to know this driver is what's available by default on most newer intel machines, and it does in fact abstract said information. And when I read a little further than just the introduction, I found the turbo boost option for this driver. Going by the information given here, the turbo option for the driver is more sensitive than it needs to be. Once I turned this to on, that is, do not use turbo, my overheating issue also went away. So now my Carbon is free of heating and wi-fi issues along with running the most up to date kernel version. Victory declared!

More information about CPU throttling can be found here.


          Resolutions   
Got a new laptop 2 days ago - a Lenovo Thinkpad Carbon X1 3rd gen. Looks good, feels good. But there's a problem - it's a bit too good. I'm not used to it, and neither is my dear Linux.

The laptop came with Windows 10 installed. I got 16GB of RAM so I could set up a Linux Mint VM with Virtualbox with Win 10 as a host. I ran into issues. I'm listing them out in case someone else is desperately searching online for solutions, and also so that the readers can have a good laugh at my expense:

1. I started by installing Virtualbox 5x and using a Linux Mint 17 iso file to create the VM. I noticed that under the 'OS Type' option, all I could see were 32-bit options. I shrugged and thought maybe it was a VirtualBox 5x thing and I'd be fine choosing Ubuntu 32 bit to run my Mint 64 bit VM. Yes, I'm stupid, but to be fair, I really didn't see any 64-bit options.

2. As one would expect, starting the VM didn't throw any explicit errors but the VM screen just went blank after flashing the Virtualbox icon. Didn't take long to realize that the 32 vs 64 bit issue was causing this.

3. Turns out Window 10 comes with Virtualization options disabled by default. To turn them on, I went in the BIOS by restarting the machine and hitting Enter. Once in, I navigated to Security>Virtualization and enabled the two virtualization options (Intel-VTx and Virtualization Extensions). I'll add more details later.

4. Rebooted and the OS dropdown in Virtualbox listed all 64-bit OS version this time. Yay! Chose one and restarted VM. It came up fine, but just one issue - one I couldn't overlook - the resolution was way too high. Windows 10 is optimized for this high res screen, but unfortunately Linux Mint isn't. "Well, lets try Ubuntu 14.04" I thought, and created another VM. Same issue. Resolution way too high.

5. At this point I played around with resolutions a bit - chose different options in VirtualBox Guest, but every other option only made the guest OS screen smaller. I didn't want that, no one would.

6. So now I decided I'd try dual booting Linux. The first thing to do for that is to turn UEFI Secure Boot off by going into boot option at restart and finding this option under the Security tab.

7. Once disabled, I burnt the Mint disc image onto a 2GB USB with Universal USB Installer. Connected it to the laptop and restarted, hit F12 and entered the boot option page. Chose the USB option but it would just keep loading back the boot option screen.

8. Thought that maybe, just maybe it was a problem with the pendrive and burnt the image again to my external HDD. But againt, the laptop refused to boot from it.

9. Unfortunately, this being an ultrabook doesn't have a CD/DVD drive. Requested my colleague to get his USB DVD drive the next day, hence bringing Day 1's struggles to a halt.

10. This morning, got the USB DVD reader and continued with my dual boot trial. Good news was that it booted linux mint trial just fine. But there was no wireless detection. None. Nada. Nil.

11. Looked around online to see if others had this issue, and they did. In fact there's a video on youtube that shows how to enable a supposedly disabled driver for wireless in Linux Mint.

12. Optimistically, I did exactly what the video said - search for driver manager, clicked on it and waited till it appeared just as shown int he video. Only on my screen, the window that appeared was blank and there was a pop up saying that I should install Mint first before making any changes. For those of you who have installed Mint in the past, having an internet connection is recommended. I don't know why its recommended, but I didn't want to take a chance, especially since I had no windows backup. Oh wait, I should explain why I didn't make a windows backup first.

13. According to my colleague, lenovo thinkpads have a "Rescue and Recovery utility" that puts the OS image on a DVD. Only in this Carbon X1 3rd Gen, there is no such utility. The only recovery option there is is to allow windows to backup everything, for which it asked for 43GB of space. Now there's no way a DVD disc would suffice. And from my experience the day before, a burnt image on a USD HDD was not being recognized by the laptop at all. So yeah, I was so frustrated by this time that I threw caution to the wind and proceeded with dual boot anyway.

14. Coming back to dual boot, I needed an internet connection. After some searching, found an ethernet wire and a port that worked. Then proceeded with the install. It asked me whether it should install Mint on the whole disc, and I chose 'Do something else'. Once I chose that, it brought me to a page where I was expected to choose the discs manually. This didn't scare me too much, until...

15. I noticed that the amount of free space was only 1 MB. You read it right... 1 MB. Turns out all the space was being occupied by Windows C drive. Rebooted into Windows 10 and chose the 'Disk Management' setting. Once in, right clicking the Window C: showed an option 'Shrink Volume'. I clicked it and for once, something was done for me automatically - Windows determined that it could shrink the C drive to exactly half of what it was currently - 512GB to 250GB. Woot!.

16. After Making 250GB available, I rebooted to Mint and on the partition page, saw that 'Free Space' now had exactly that - 250GB. Happily, I followed this awesome guide to create / and /HOME and swap partitions. Once done, I briefly looked into 'Device for bootloader installation' option and made sure I didn't choose something that'd overwrite the Windows loader. After some Googling, I was certain that the default internal SSD option /dev/sda was ok to proceed with. With this, my dual boot voes ended. But this wasn't the end of all my problems.

17. Once I rebooted to Linux Mint, I noticed 3 things:

  • The resolution was abysmally high
  • Still no wireless detection
  • the keys atop the touchpad were just scrolling up and down a line when pressed, not actually clicking anything.
18. my Linux kernel version was 3.13, and according to Linux Mint and Ubuntu forums, the Intel Wireless 7265 card wasn't supported. So to make it work, I would have to upgrade the kernel. Followed this tutorial to upgrade the kernel to 3.14. Unbelievably, the steps all worked on the first try. 

19. After rebooting post upgrade, I went online and downloaded the Wireless 7265 driver for the kernel from this page and copied the *.ucode file to /lib/firmware with sudo. And Voila! Wireless started working. 

20. Still ignoring the resolution issue which was the cause of all this to begin with, my stupid brain decided to resolve the mouse/touchpad buttons issue first. After again reading through forums, it appeared that this bug was reported sometime in April 2015, and even though there was a temporary workaround ( echo "options psmouse proto=imps" > /etc/modprobe.d/psmouse.conf) that involved Synaptics touch to be disabled (no finger scrolling, zooming etc.), the buttons started working but the touchpad was essentially jumpy and useless. 

21. After I undid the change by removing the  options psmouse proto=imps  line from the config file, I rebooted again and decided it was already time for yet another kernel upgrade to 4.0. Post 4.0 upgrade, the touchpad and mouse issues were fixed and I only had to reinstall the Wireless Card 7265 driver. 

22. Finally, all my issues except the resolution were resolved. Even with Linux as base OS, changing resolution seemed to only show a smaller screen. At this point, I emailed the salesman I dealt with for purchase my laptop stating "Both the laptop and windows 10 are too new to be supported by open source software critical to my work and study". As a last resort, and because I knew I was exhausted and not thinking well, I asked my colleague if this was the normal behavior upon changing screen resolutions. He suggested I try a particular option 1920 x 1080. And lo and behold... everything was MUCH BETTER. Still not perfect, but very usable. I asked how he knew that particular resolution would work, and he said it was the standard resolution for most screen. At that point it dawned on me the number of times I'd seen this particular resolution everywhere. Once would think I'd know to choose this option when looking at resolutions, but I just didn't realize. 

And so, here I am typing out this blog post on my shiny new Thinkpad ultrabook with Linux Mint 17. Go ahead and laugh. I'm laughing too. :D

          Installing Postgres from Source on Linux   
This page is an excellent guide to install postgres from source on any linux distro. Though useful, there were certain packages I found missing during './configure' step. Also, I had issues when I tried installing postgres with the perl module. Solutions to these issues are as follows:

Missing Packages: 

On some of the newer versions of Ubuntu and Debian 6.0.x, you will get an error on the './configure' step, complaining the missing packages 'readline' and 'libz'. Install these packages as follows:
  • sudo apt-get install zlib5-dev
  • sudo apt-get install libreadline-dev

Error on Make Command after './configure --with-perl':

If configuring with perl gives an error of a missing libperl.so, even though you have perl installed on the system, it is because the path to this library file cannot be found. The solution is to create a symbolic link as follows:
  • ln -s <original directory> <linked location>
Where, the original directory can be found with the command 'locate libperl.so' and should most likely be in /usr/lib/... and the location where you will require to create the link would be /usr/lib/perl/5.x/CORE/libperl.so

Once you do this, repeat the 'make' or 'make world' command, and it should run successfully. 

          Windows 10 BASHing   
It's a play on term, bash, because I'm trying out Window 10's Ubuntu BASH subsystem. Get it? Geddit? Anyway. I'm trying out some new development on Windows 10 and I wanted to write up some of my experience here.
          Ubuntu 12.04 y su nombre….   
Ya como nos tiene acostumbrado Mark Shuttleworth, no se hizo esperar el anuncio en su blog que revela el nuevo nombre para la próxima versión de Ubuntu 12.04 LTS. Para esta versión se escogió el nombre Precise Pangolin. Pangolin es un mamifero el cual posee una coraza de escamas en su lomo y escogió este animal precisamenteSigue leyendo "Ubuntu 12.04 y su nombre…."
          Seven Os, un Ubuntu con el aspecto de Windows 7   
Muchas veces pasa que la gente no ve a GNU/Linux con buenos ojos porque piensa que es para Frikis o que es dificil o simplemente no estan “lindo” como Windows en cualquiera de sus versiones. Pues bien, existe algo que puede hacer que esas personas vean con otros ojos a GNU/Linux y eso es unaSigue leyendo "Seven Os, un Ubuntu con el aspecto de Windows 7"
          【Ubuntu】.debパッケージの依存関係を編集して無理矢理インストール。   
現行の Ubuntu 16.04LTS に、ちょっと昔のプリンタドライバをインストールしようとしたら「依存関係が満たされないのでインストールできない。libti…
          【Ubuntu】ウィルス対策。ClamTkが上手く動作しないときの対処あれこれ。   
巷では相変わらず、Windows に感染するウィルスが猛威を振るっているようです。 ほぼ Ubuntu しか使っていない自分にとっては関係のない話に聞こえるかも…
          【Ubuntu】"二度とこのメッセージを表示しない" ボタンを間違って押してしまったので復旧する。   
さて Ubuntu を使っていると、ネットワークへの接続 / 切断時に以下のようなダイアログが表示されます。 このダイアログのボディ部分をクリックするとメッセー…
          【Ubuntu】"VboxClient : the VirtualBox kernel service is not running. "   
VirtualBox を 5.0 から 5.1 にアップグレードしてみた。 ダウンロードページ うちのホストマシンは Ubuntu 16.04.1。なのでアップ…
          Answer by LeonardoDaVinci for

Wie aktiviert man Browser Caching auf dem Apache Server damit Bilder, CSS und JavaScript Dateien bei einem wiederholten Besuch nicht nochmal heruntergeladen werden müssen und somit die Webseite schneller geladen werden kann und die Server entlastet werden?

Das wird auch von Google PageSpeed empfohlen, wenn man damit die Performance der Webseite testet.

   
- Aktiviere auf deinem Server das **[Expires Modul](http://httpd.apache.org/docs/current/mod/mod_expires.html)**, das macht man bei Ubuntu mit dem Befehl: sudo a2enmod expires - Starte den Server danach neu: service apache2 restart - Trage in deine Virtualhost Datei folgenden Eintrag ein, der sollte zwischen **``** und **``** stehen: ExpiresActive On ExpiresDefault " modification plus 1 week" - Du kannst selbst Dateiendungen aussuchen, die im Speicher des Browser für bestimmte Zeit gespeichert werden soll, wenn du beispielsweise png nich cachen möchtest, dann ersetze **`jpg|jpeg|png|gif|js|css`** durch **`jpg|jpeg|gif|js|css`** usw. - Im Beispiel ist die Caching-Dauer auf 1 Woche gesetzt: **`ExpiresDefault " modification plus 1 week"`** - Du kannst die aber auf deine Bedürfnisse anpassen: - Z.B.: 6 Stunden und 7 Minuten: **`modification plus 6 hours 7 minutes`** - Z.B.: 1 Monat 21 Tage und 9 Stunden: **`modification plus 1 month 21 days 9 hours`** - **`modification plus 1 week`** bedeutet, dass eine jpeg Datei, die z.B. am 02.07.2013 um 17:00 auf dem Server geändert wurde, für eine Woche cachebar ist. Das heißt, dass wenn ein Besucher am 3.07.2013 die Seite besucht, wird diese jpeg Datei gecached und wenn er am 5.07.2013 wiederkommt, wird die Datei nicht mehr von der Webseite heruntergeladen, sondern aus dem Browsercache geladen. Wenn derselbe Besucher nun am 15.07.2013 wiederkommt, dann ist die Woche abgelaufen und die Datei wird von der Webseite heruntergeladen. - Man kann statt **`modification plus 1 week`** auch **`access plus 1 week`** benutzen. In diesem Fall sorgt **`access`** dafür, dass der Browser des Besuchers die Dateien für genau eine Woche seit seinem **ersten** Besuch cached, wobei nicht wichtig ist wie oft man die Datei auf dem Server aktualisiert. - Starte den Server danach neu: service apache2 restart Weitere interessante Artikel: - [phpgangsta.de](http://www.phpgangsta.de/expires-header-und-komprimierung-aktivieren-im-apache2) - [blog.splash.de](http://blog.splash.de/2010/01/29/cache-kontrolle-beim-apache-via-htaccess/)
          RE[4]: Yay?   
Won't be that hard... Unless it changed recently, many media players like Rhythmbox and Totem didn't. Neither Quod Libet, but I guess it doesn't count. Same thing for EOG or the other image viewer included in Ubuntu supporting slideshows (don't remember its name and I'm not in front of a GNOME system) didn't either. That said, the new Gedit looks nice. , heh, I'm doing the latter right not... It's not that bad, but copy/paste between files is a pain. The little editor is getting a bit fat, but as long as it's useful, I won't complaint...
          Unable to compress file with password (7z encryption)   
Hello Internet, When I used Ubuntu, I was able to right click a folder, select compress, and turn a folder into a compressed archive with a password. I'm liking Fedora so far, but I cannot encrypt files with a password. I am running Fedora 26 workstation (GNOME) and I have p7zip and p7zip-plugins installed. This is what I see: [Document Name] .zip .tar.xz .7z These are no options to add a password, and no options to select additional file extensions. Thanks for your help, I will try installing Cinnamon tomorrow in the hopes that a more feature-rich desktop environment will have this ability built in.
          Автор: Darkest   
[QUOTE]Люди, просветите невежду! за счет чего они финансируются?[/QUOTE] [QUOTE] Не все свободное ПО - бесплатно. Плюс плата за использование конкретного ПО в сборках, плата за техподдержку, за внесение изменений в программы, за написание новых программ. Вариантов масса. Те же туллбары Гугла и Яндекса, например...[/QUOTE] Это всё различные варианты, но Ubuntu финансирует этот самый основатель, который вроде как богатый. Разумеется, какую-то дополнительную денюшку они имеют.. [QUOTE]Убунту плодится почти как Хром, вот только глобального в этих обновах к сожалению кот наплакал...[/QUOTE] О том, что заметно, они рассказывают, или ты хочешь, чтобы они расписывали какие новые версии каких компонентов присутствуют в новой версии системы и чем конкретно эти компоненты отличаются от старых? Разумеется, этим никто не будет заниматься. Они распространяют дистрибутив, то есть они занимаются тем, что собирают воедино всё что наделали полезного под линукс. Если интересны изменения на более низком уровне, читай подробности...не здесь..
          Автор: Bob Beler   
[quote] @[b]TJ[/b]: [/quote] Поддерживаю облака только для игр (OnLive) и, может в дальнейшем, для запуска тяжеловесных прог на слабых компах. А персональные данные не вижу смысла хранить  где-то не у себя [img]http://itc.ua/sites/all/libraries/tiny_mce/plugins/emotions/img/smiley-laughing.gif[/img] Если такого же мнения придерживается большинство, то Ubuntu будет жить[img]http://itc.ua/sites/all/libraries/tiny_mce/plugins/emotions/img/smiley-laughing.gif[/img]
          Comentario en Actualizar UBUNTU por consola por Tenorio   
Muchas gracias, me fue de mucha ayuda!!!!
          Comentario en Actualizar UBUNTU por consola por Luis   
Muchas gracias me sirvió mucho. Saludos desde Ecuador
          Docker Image for Azure CLI and Azure Powershell   
If you are a MAC or Linux user like I am and you like to manage your azure environment with azure-cli or azure-powershell you can use the following docker Image: cvugrinec/ubuntu-azure-powershellandcli:latest Just type the following: docker run –it  cvugrinec/ubuntu-azure-powershellandcli /bin/sh Please note that for azure-powershell only ARM mode is supported. Azure cli supports ASM and … Continue reading Docker Image for Azure CLI and Azure Powershell
          How to Create a CIPA Compliant Content Filter Using Free Software.   
In this post I will describe how to use free open source software to create a simple CIPA compliant web filter that can be used in a small to medium public library.  I am using Ubuntu server 10.04 with Squid, … Continue reading
          What I did to make the Everex ready for patrons   
The first thing I did was download and burn Ubuntu 8.04 Hardy Herron to a disk.  This was eventually done on a “gasp” Windows machine, since the gOS burning software was somehow not able to burn a readable .iso.  So… … Continue reading
          Everex pc2 first impressions   
The everex pc arrived today and the reviews of gos being a total piece of crap os are true.  I am sure that anyone with tech skills would put on a Distro like Ubuntu, and those without tech skills (like … Continue reading
          Ubuntu MATE si que utilizará MIR y no Wayland en sus futuras versiones   

Entre los proyectos que Ubuntu dejaba de lado estaba MIR, el famoso servidor gráfico que pretendía sustituir a X.Org y...

El artículo Ubuntu MATE si que utilizará MIR y no Wayland en sus futuras versiones ha sido originalmente publicado en Linux Adictos.


          Βίντεο Καραγκιοζιλίκια LIVE στον ΣΚΑΪ Ρεπόρτερ - ματάκιας"καμακώνει" περαστική κοπέλα την ώρα του ζωντανού ρεπορτάζ για τα σκουπίδια    
Προσπαθούν να κάνουν σοβαρό ρεπορτάζ στον ΣΚΑΪ, αλλά δεν μπορούν τα παλικάρια.

Ο δημοσιογράφος του ΣΚΑΪ, αντιμετώπισε με τόση μεγάλη σοβαρότητα το θέμα με τα σκουπίδια, που σταμάτησε τη μετάδοση του ρεπορτάζ για να χαλβαδιάσει μία περαστική κοπέλα. Τον κακομοίρη τον ρεπόρτερ, ας τον στείλουν διακοπές.
Δείτε το ΒΙΝΤΕΟ και τα συμπεράσματα εναπόκεινται στην κρίση του καθενός. 

          Comments: Карта памяти SMARE на 128Гб   
Пользуетесь Linux и задаете такой простой вопрос. Я помню ставил Ubuntu, в ней были встроены свои инструменты для замера скорости.
Читать далее
          Comments: Карта памяти SMARE на 128Гб   
> Вопрос на «засыпку»: кто-то знает прогу для тестирования на ту же реальность обьема и тестирование скорости под Linux?

# dd if=/dev/sdX of=/dev/null bs=1M conv=fdatasync
# hdparm -t /dev/sdb

$ apt-cache search hdd test
diskscan — scan HDD/SSD for bad or near failure sectors

Ещё:
fio
iozone
bonnie
dd
seeker

(http://ubuntuforums.org/showthread.php?t=1155245&p=12371796#post12371796)
Читать далее
          The AnandTech Podcast: Episode 13   
Anand Shimpi, Brian Klug & Dr. Ian Cutress discuss their best products of 2012. Haswell and Valley View are up for discussion as well. Brian goes on a rant about mobile operators removing fieldtest from smartphones. Brian also discusses Ubuntu for smartphones, the new Go Pro Hero 3 Black and some final words on the Galaxy Camera. The trio talks a bit about the ARM vx x86 power articles as well.
          Ubuntu MATE si que utilizará MIR y no Wayland en sus futuras versiones   
Entre los proyectos que Ubuntu dejaba de lado estaba MIR, el famoso servidor gráfico que pretendía s
          Linux UBUNTU – Marvell 88w8335 802.11b/g Wireless (rev 03)   
Taper dans un terminal lspci | grep Ethernet Si vous obtenez un retour avec ceci qui apparait : Ethernet controller: Marvell Technology Group Ltd. 88w8335 [Libertas] 802.11b/g Wireless (rev 03) C’est que ce tuto est pour vous. Voici un Plan pour mener à terme l’installation. La marche à suivre dépend si vous avez une connexion ou pas. Outils nécessaires à …
          Adiós a Boxee para el PC   
Como señalan nuestros compañeros de MuyComputer, si queréis seguir poder usando Boxee en vuestros PCs o Mac, será mejor que os deis prisa: la empresa está comenzando a eliminar las descargas de su cliente para Windows, Mac OS X y Linux (Ubuntu 32 y[...]
          Dragonfire: Un asistente virtual para Ubuntu   

Aunque a muchos nos cueste, debemos abrirle los brazos a la inteligencia artificial y comenzar a adquirir herramientas que estén equipadas con estas tecnología. En el mundo del software libre los avances de la inteligencia artificial son numerosos, en esta ocasión queremos dar a conocer un asistente virtual para Ubuntu llamado Dragonfire que busca hacerse un […]

El artículo Dragonfire: Un asistente virtual para Ubuntu aparece primero en Dragonfire: Un asistente virtual para Ubuntu.


          A new Linux desktop   

I'm taking some time off this week and there is nothing like starting out by wasting a few hours on your computer system. I was kind of bored with my current Linux setup (SUSE 10) and wanted to switch from KDE to GNOME anyway. So instead of going to SUSE 10.1 I decided to try something new. I have to mention that this Linux system is not my primary work machine. I have been using a Mac PowerBook as my main computer for the last year and a half, and i have been getting spoiled with things that just work. I don't think I would be willing to give up the seamless wireless connectivity, the easy multi display setup (including projectors) and the near perfect performance of sleep/resume that the PowerBook has been providing. So, for a laptop I'll continue to use a Mac. The system I am reconfiguring this time is my desktop that I primarily use for Java development and to run an Oracle XE database.

The distros that drew my attention were the latest Ubuntu and SUSE Enterprise offerings. I've tried both brands before and decided to compare the ease of use and the out of the box experience this time around. I'm getting older and more lazy, and am not willing to spend hours configuring my Linux systems anymore. They either work out of the box or they don't. I will spend 15-20 minutes searching the web for some answers, but after that I tend to give up and try a different distro. I'm also not unwilling to pay a modest fee for the latest SUSE enterprise system. To pay $50 for a year seems well worth it if it provides a superior ease of use and less time fiddling with configuration files.

Installation was easy enough for both versions and afterwards I installed the "commercial" Nvidia drivers for both systems without any trouble - so far so good. Next I will try a few common tasks and compare the ease of use for both systems. I'm not trying to figure out if something is possible to get working, I'm more looking for if it is easy enough for me to configure or use the feature within 15-20 minutes.

Appearance wise there is not much difference - one is blue and the other a brownish orange. Both use a current GNOME desktop and they include the usual suspects like Open Office 2.0 and Firefox 1.5. SUSE's main menu system is a bit different with the main menu including Search, System, Status and Application launch areas, but it seemed pretty easy to use. Ubuntu's menus are traditionally clean without a lot of applications (that I would never use anyway) cluttering up things. I really like that approach. If I need something oddball I'll use the command line or add it manually.

Here are the things I tried and my notes.


Ubuntu 6.06 LTS Desktop

SUSE Linux Enterprise Desktop 10 (SLED)

Connect a USB stick and copy files and then unmount

YES - no problems here.

YES - unlike some previous SUSE versions I did not have any problems this time. Have had some issues unmounting without being root before.

Browse a site using Macromedia Flash (only provided in 32-bit)

MAYBE - you have to click on the link that searches for the plug-in, download it and install it manually. Now, on a 64-bit system you can't install the 32-bit flash player. Saw some posts that you could replace Firefox with the 32-bit version and use the 32-bit plugin. I didn't try that.

YES - the Flash player plug-in is pre-installed by default. This was also true for the 64-bit system. Not sure how they accomplished this since I have not been able to locate a 64-bit flash plugin. But it works.

Connect and play mp3's from my iPod

OK - Rhythmbox refused to play any mp3 files. Installed all gstereamer-plugins that I could find but still got an error message that the MIME type of the file could not be identified. Also there is a confusing array of software involved -- Music Player, Rhythmbox, gstreamer, serpentine and Sound Juicer.

After experiencing the ease of use of Banshee on the SLED system I installed it on the Ubuntu system as well and did manage to get it to work. Now, of course, I can also play the mp3 files with Music Player - go figure.

Have to mention that the Ubuntu desktop icon actually looks like an iPod while the SLED icon looks like a generic mp3 player. I like the iPod icon better.

YES - easy. Banshee detected the mp3 files automatically and played them

Also, I can play the Apple non-DRM AAC music files in SLED, but couldn't figure out how to add that to Ubuntu. I believe its part of a "non-free" Helix package that is included with SLED. Not a big deal, I can live with using mp3 files. I would actually prefer to use ogg, but that is not directly supported by iTunes or by the iPod itself.

Hook up a digital camera and import photos

OK - works with JPG but not CRW files. Camera detected and able to import picture but the gThumb application that pops up to do the import is not able to handle Canon's RAW format it seems. You can install F-Spot to get the same functionality as SLED though.

YES - works with both JPG and CRW files. The F-Spot application that automatically pops up looks fairly nice too - similar to iPhoto.

Connect to my HP Deskjet 6840 and print a picture

YES - selected Network Printer - HP JetDirect and provided the IP address - test page and photos printed without trouble

YES - same experience as for Ubuntu.

Install the latest Java 6 Beta 2 release

YES - download the non-RPM binary install and copy the directory to /usr/java. Add paths to .bashrc and it works fine.

YES - download the corresponding RPM, install it and add the paths to the bash profile.

Install Oracle 10g XE (only provided in 32-bit)

MAYBE - on a 32-bit system it's a very easy install - followed the directions from Oracle OTN.

On a 64-bit system I was not able to install the software since it is only provided in a 32-bit version. The dpkg utility refused to install saying that the package architecture (i386) didn't match the system (amd64) . Added the -force-architecture option and got the software installed. The database configuration step doesn't create the database though. This is most likely due to the fact that I couldn't find a 32-bit libaio package to install using Synaptic. Did not try to work around this any further since it was getting late. It should be possible though - see Valery's Mlog for more details.

YES - the rpm installer complained about libaio not being installed. Once that was taken care of the install went smoothly. (Note: on a 64-bit system make sure to install libaio-32bit since libaio by itself will satisfy the rpm check but it won't work with the Oracle 32-bit executables)

So if we look at the scorecard, SUSE Linux Enterprise Desktop has the ease-of-use edge over Ubuntu 6.06 LTS Desktop. Ubuntu would have worked well on a 32-bit system, once I figured out how to overcome the shortcomings with the media integration. On a 64-bit system though, I wouldn't be able to do everything I wanted to. So, for my 64-bit desktop machine it looks like SLED will save me considerable time, since everything worked out of the box. I do like the support provided on the Ubuntu forums and haven't found the same kind of community for SLED. But maybe I won't need it if everything just works.

Overall I'm impressed with both of these distros, but I have to choose one and this time SLED won out due to the fact that everything just worked. Let's see how long it lasts. I think I'll still keep a copy of Ubuntu running on an older 32-bit machine I have sitting around.




          How to Hardening your own program in GNU/Linux   
Platform: OpenSUSE 12.3

Apparmor is a implementation of confinement technology. It could help you prevent those unknown attacks like 0-day vulnerability. In OpenSUSE/Ubuntu, it's very easy to install it. For the case in openSUSE 12.3, type "yast2" in terminal or use GUI software management can install the apparmor. Once you install the apparmor, you need to make the profile for the program what you want to be hardened.

Firstly, please download the example files here. Then compile the program:

shawn@linux-sk8j:~> gcc apparmor_test.c

Generate the profile for your program:
shawn@linux-sk8j:~> sudo /usr/sbin/genprof a.out

We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:

    #1) Respect the privacy of others.
    #2) Think before you type.
    #3) With great power comes great responsibility.

.........................................
.........................................
.........................................

Finished generating profile for /home/shawn/a.out.
 -----------------------------------------------------------

Then you can find the profile in /etc/apparmor.d/home.shawn.a.out. Add a few of lines into it like this:

#include

/home/shawn/a.out {
#include

   /home/shawn/a.out mr,
   /home/shawn/hello r,
   /home/shawn/world w,
   network stream,
}

Because apparmor is using whitelist-like policy in default. The above example means: only allows this program( a.out) have the read permission on file /home/shawn/hello, the write permission on file /home/shawn/world and the tcp connection. If this program have a stack-based buffer overflow issue, the attacker might want to spawn the shell by exploit it. In this case, this not gonna be happened. For further reading about apparmor profile, you might be interested in this article. Other similar implementation like SELinux and Grsecurity/PaX could achieve the same goal. SELinux is the most powerful one but the most difficult to use.

When you done the confinment hardening, there are a lot of mitigation technology you should consider. It's much easier to use. Please keep this in mind: these defensive technology are what we called "mitigation", which means the skilled hackers or attackers having the ability to exploit it. It's only the matter of time.

GCC options:
------------------------------------------------
Stack canary:
-fstack-protector, only some functions being protected
-fstack-protector-all, protect every functions in your program

Bypass method, please check Scraps of notes on remote stack overflow exploitation in Phrack Issue 67.

Heap( malloc() corruption check):
default since glibc 2.5. Please use the latest version of glibc.

Position-Independent-Executable:
-pie, it would use the advantage of ASLR which provided by kernel. Remember turn on your ASLR:


Bypass method, please check Bypassing PaX ASLR protection in Phrack Issue 59. Yes, it's an old paper but it's still worth to read.

GOT memory corruption attack hardening of ELF binaries:
-z relro, Partial RELRO
-z relro -z now, Full RELRO

Bypass method, please check The Art Of ELF: Analysis and Exploitations

String Vulnerability mitigation:
-FORTIFY_SOURCE, mitigate string format vuln

Bypass method, please check A Eulogy for Format Strings in Phrack Issue 67.

Non-executable stack:
-z nostack

Well, there are a lot of ways to bypass it.

I also made a list a few months ago. You may want to check it too. Yes, there are a lot of mitigation tech and a lot of bypass tech. Offensive and defensive technologies are like brothers. The only matter is they will fight each other to the end of the world;-)

btw: You don't need to worry about the performance hit when you turn on these mitigation tech except -fstack-protector-all. That's it!

May L0rd's hacking spirit guide us!!!
          Vuln assessment for PALADIN forensic tools free version   
I went to the China Mac Forensic Conference last week. This was my 1st time I attended a security con about forensic. Some of security guys gave us a few free speech and it's all about forensic. In forenisc field, the only stuff I've know its Lynis which was written by Michael Boelen. They were talking about forensic stuff on Mac/iOS platforms in the morning. That made me a little boring. But what else can I blame about? This conference is called Mac-Forensic*. Fortunately, I found something very interesting at the afternoon. A company named SUMURI providing a forensic solution which based on GNU/Linux. This GNU/Linux distro is called "PALADIN". I got a free Live-DVD and booted it up in scene. Well, I was fuc* exicting because I got tens of shitloads of information about Mac/iOS in that day. Now I had something I'm familiar with: GNU/Linux. I found some potential risks for PALADIN GNU/Linux distro. I've already notified them. Hope they could spend more time on sec stuff.

OK. When PALADIN booted up, you can see the ubuntu-like( Unity?) GUI:



PALADIN provides a lot of open source forensic tools:

In the free version, the only closed-tool is "PALADIN Toolbox" which can be found in the Desktop and the binary file is located in /usr/bin/toolbox. This binary is using many free/open source libraries. The 1st potential issue is violation of free/open source licenses. Then I asked Steve Whalen "are you sure that toolbox has no violation of the free/open source licenses" in the scene. His answer is pretty sure that the toolbox won't be violated any free/open source licenses:

Then, I took a few mins to investigation on the binary. Firstly, the entry address: