Sunday, September 14, 2014

Typing is not a bottleneck

Some phrases are deep even when short

Wednesday, August 27, 2014

How to not minify already minified on each "grunt build" in yeoman

Long title, yeah.

Let's speed up Grunt a little

In some of my projects I use Yeoman (generator-angular), and on each "grunt build" Grunt minifies all JS files from the 'vendor' list (and from 'scripts' too, but that's not a problem).
It takes a long time, especially if you use a lot of external modules.
But most of them already ship minified versions, so why waste time minifying them again?

I've made Grunt skip minification of the 'vendor' block, and it cut "grunt build" execution time by a factor of five.

But it wasn't easy (that's why I decided to write this article).

You can't just ask usemin to skip 'vendor', but you can override the configuration that usemin generates for uglify. To do this, add an 'onlyScripts' target to your uglify task:

    uglify: {
      onlyScripts: {
        files: [{
          dest: '<%= yeoman.dist %>/scripts/scripts.js',
          src:  ['.tmp/concat/scripts/scripts.js']
        }]
      }
    },

Also, uglify will no longer copy your vendor.js from the temporary folder, so add a "vendorJS" target to the "copy" task:

      vendorJS: {
        expand: true,
        cwd:    '.tmp/concat/scripts/',
        dest:   '<%= yeoman.dist %>/scripts/',
        src:    'vendor.js'
      }

Then, in the "build" task, set the uglify target to 'onlyScripts' and copy vendor.js:

  grunt.registerTask('build', [
    // ...
    'uglify:onlyScripts',
    'copy:vendorJS',
    // ...
  ]);

About the commented-out lines:
 "wiredep" - this task replaces all .min.js links (in the "vendor" block) with .js links. We don't want that (otherwise our vendor.js would be a huge non-minified file).
 "concurrent:dist" - with this task commented out, Grunt works much faster on a MacBook Pro with 2 cores. I don't know why. Try it, maybe it will help you too.
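Putting it all together, the whole "build" task might look roughly like this. This is a sketch: the task names other than 'uglify:onlyScripts' and 'copy:vendorJS' are taken from a stock generator-angular Gruntfile of that era and may differ in your version.

```javascript
// Sketch of a modified generator-angular "build" task.
// 'uglify:onlyScripts' and 'copy:vendorJS' are the targets defined above;
// the rest are the stock generator-angular tasks (yours may vary).
grunt.registerTask('build', [
  'clean:dist',
  // 'wiredep',         // commented out: would swap .min.js vendor links back to .js
  'useminPrepare',
  // 'concurrent:dist', // commented out: was slower on a 2-core MBP
  'concat',
  'ngAnnotate',
  'copy:dist',
  'copy:vendorJS',      // copy the already-minified vendor.js as-is
  'cssmin',
  'uglify:onlyScripts', // minify only our own scripts.js
  'filerev',
  'usemin'
]);
```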

Sunday, August 25, 2013

Automation of Grunt builds in WebStorm/PHPStorm

It's very handy to use Yeoman and Grunt to build AngularJS (and not only) applications. Here I'll describe how to make it even more comfortable in JetBrains IDEs.

The default Yeoman installation is very good for development on localhost. But that's not always possible - some databases and services just can't run on localhost, not even their mocks. For example, Active Directory: I don't even want to imagine how much time I'd spend making a small copy of AD on my localhost.

For deployment I use Beanstalk - all I need to deploy to the development server is to press the "git commit and push" button in the IDE (I use PHPStorm because the backend in my current projects is PHP).
But before that, I have to run "grunt build" to put all the results into the "dist" folder.

There are two ways to automate Grunt builds in PHPStorm/WebStorm:

1) File Watchers - can be used when the repository contains only one application (because of the "working directory" argument in the File Watcher settings);
2) Rebuild manually by running an external tool from the context menu or executing "grunt build" in the console.

To run grunt build in the Windows console:

1) add the path to the nodejs folder and %APPDATA%\npm\ to your system PATH variable (%APPDATA% already points to the Roaming folder)
2) open the console in the IDE (Shift+Ctrl+X)
3) if necessary, set the working directory with "cd dir_name"
4) run "grunt.cmd build"

To create a "Grunt build" context menu entry in Mac OS:

Settings -> External Tools -> +
Name: build
Group: Grunt
Program: grunt
Parameters: build
Working directory: $FileDir$

To create a "Grunt build" context menu entry in Windows:

Settings -> External Tools -> +
Name: build
Group: Grunt
Program: grunt.cmd
Parameters: build
Working directory: $FileDir$

Now Grunt will rebuild our project, but that's not enough yet - the names of some files will change, and we'll need to add those files to git (run "git add .").

To do this, I use grunt-exec. Run in the app's folder:
npm install grunt-exec
then load it in Gruntfile.js (before the last '}', for instance):

  grunt.loadNpmTasks('grunt-exec');

then create an 'exec' task in grunt.initConfig (grunt-exec expects a named target, here called 'gitAdd'):

  exec: {
    gitAdd: {
      command: 'git add .',
      cwd: '<%= yeoman.dist %>'
    }
  }

Now, to run this task on each build, add 'exec:gitAdd' to the task list of the 'build' section. It will look something like this:

  grunt.registerTask('build', [
    // ...
    'exec:gitAdd'
  ]);

That's all! :)

Wednesday, July 24, 2013

How to install Golang 1.1 in Debian from repository

I like to use packages in Debian - they're easy to update and more stable.
The Go version in Debian Wheezy is 1.0.2, but version 1.1 is available in the unstable (sid) branch, so let's use it.

Add sources list

It's not safe to just add "unstable" to the sources list, so I chose the approach described by Brendan Byrd; I'll quote his text here just to save it:

First, create the following files in /etc/apt/preferences.d:

Package: *
Pin: release l=Debian-Security
Pin-Priority: 1000

Package: *
Pin: release a=stable
Pin-Priority: 995

Package: *
Pin: release a=unstable
Pin-Priority: 50
Now, create a matching set in /etc/apt/sources.list.d:

deb   stable/updates  main contrib non-free
deb   testing/updates main contrib non-free

stable.list:
deb       stable main contrib non-free
deb-src   stable main contrib non-free

unstable.list: same as stable.list, except with unstable.


Then run apt-get update and, finally:
aptitude install golang/unstable


Check version:
go version

And check code execution as described in

Thursday, July 18, 2013

PHP, MSSQL, nvarchar (fetch and write UTF-8 with ODBC)

Today I found a pretty annoying thing in the PHP MSSQL ODBC driver (I use it via PDO).
This driver doesn't support nvarchar fields (or only nvarchar(max), it doesn't matter to me - I can't change the database schema just because I'm using PHP).
After 2 hours of googling I found a solution. It's a hack, but it works fine with nvarchar, decimal, varchar, int and others.

To read 

I found it accidentally in this blog and I'm very thankful to David Walsh:
SELECT CAST(CAST([field] AS VARCHAR(8000)) AS TEXT) AS field FROM table

To write

It's from StackOverflow:
$value = 'ŽČŘĚÝÁÖ';
$value = iconv('UTF-8', 'UTF-16LE', $value); //convert into native encoding 
$value = bin2hex($value); //convert into hexadecimal
$query = 'INSERT INTO some_table (some_nvarchar_field)  VALUES(CONVERT(nvarchar(MAX), 0x'.$value.'))'; 
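The same transformation can be illustrated in Node.js, whose Buffer supports UTF-16LE directly (the helper name and table/field names here are made up for the example):

```javascript
// Build a hexadecimal nvarchar literal for MSSQL from a string.
// Node's 'utf16le' encoding produces the same bytes as the PHP
// iconv('UTF-8', 'UTF-16LE', ...) + bin2hex() steps above.
function nvarcharHex(value) {
  return '0x' + Buffer.from(value, 'utf16le').toString('hex');
}

const value = 'ŽČŘĚÝÁÖ';
const query = 'INSERT INTO some_table (some_nvarchar_field) ' +
  'VALUES(CONVERT(nvarchar(MAX), ' + nvarcharHex(value) + '))';
```

The hex literal bypasses the driver's string handling entirely, which is why the trick works regardless of the connection's character set.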

Wednesday, June 12, 2013

Google recommendations about multi-lingual sites are wrong

I think the recommendations from Google Webmaster Tools - Multi-regional and multilingual sites - are bad. I'm just one person, a humble programmer, and I dare to think that I'm smarter than Google? Sounds too brave even for me, but in this case - yes.

Google recommends using sub-domains or part of the URL path to define the current language.
And here are the reasons why I think this is a bad recommendation:

1) When a user shares a link, the language is hardcoded.

It's not obvious when your main language is English, but if a user from, say, Italy wants to share some Wikipedia URL in his favorite social network, then users from all countries will open that page in a foreign language and will be forced to hunt for the language-switching button on that site. Try to find that button on the Google Webmaster Tools site.

2) It's bad for the semantic web.

Water is water. In France, Italy, Germany and even in Soviet Russia, water is H2O. So by that URL (it's an example) I expect to read about water. Would the localized URLs point to a different resource, not water? If not - why different URLs?

3) Language is just an option of content representation, the same as background color.

The main thing is the meaning of the content, not its color, language or rounded borders. So use URL parameters, which you already use for other options. It's not intuitive - are these different APIs? Will their responses differ only in language, or in other things as well? Will the time in the response be in PDT or in IST?
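The parameter-based approach can be sketched with the standard URL API (the domain and path below are made-up examples):

```javascript
// One resource, one URL; language is just another query option,
// like any other representation parameter.
const url = new URL('https://example.com/wiki/Water');
url.searchParams.set('lang', 'it');

console.log(url.toString());              // https://example.com/wiki/Water?lang=it
console.log(url.searchParams.get('lang')); // it
```

Dropping the parameter yields the canonical resource URL again, which is exactly the property the path-prefix and sub-domain schemes lose.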

But these are Google's recommendations, the source of visitors...

Of course, that's reason enough to follow these recommendations if you want to optimize your site as much as possible for free search-engine visitors. But I'm sure sites should be built for humans, not for search engines. And I hope some engineers at Google will read this (or a similar) article and change their recommendations.

Sunday, January 20, 2013

Couchbase support in Jamm!Memory

New storage

Jamm!Memory, a PHP library with a universal interface to the best cachers (Redis, APC, Memcached), now has an object for working with Couchbase - CouchbaseObject.
All features of the library are implemented in this class: tags, dog-pile and race-condition protection, lock/acquire and others.
All code is covered by tests, tested and even benchmarked. CouchbaseObject is not used in production yet, but I'm going to (see below).

The main feature of this storage is fault tolerance, of course.
While working on this code and testing it, I built a cluster from a few nodes, and it was really simple. The server GUI is super user-friendly, with lots of monitoring tools and node controls. Adding a new node to the cluster takes just 2 minutes (maybe less), plus time for rebalancing (it happens automatically, and all data stays available).

And next branch

The Jamm!Memory API is already more than 2 years old. So I made branch 1.0, which currently holds all the library code, tested by unit tests and by heavy production usage over those 2 years.
In the next branch, 2.0, I will write a refactored version of Jamm!Memory. The API will change, all features will be kept, and new features will be added. Some other storages may be added too (Riak, Cassandra).


I want to create another library, dedicated to Redis Cluster. It's still in active development, but it's already time to write clients :) It will be a key-value storage only, but I hope it will be as fast as the existing Redis. I don't like that "Redis cluster sacrifices fault tolerance for consistency", but maybe it's just me and I don't understand something.

Also, I hope Couchbase will get the ability to turn off permanent writes to disk and will copy data in the background, so that all reads and writes happen in memory.
Please support this idea by posting in the forum thread.

I've created a few threads in their forums, and accidentally (I swear) the whole "last post" column became filled with my nickname :)

Also, I'm going to use Couchbase in production as storage for cache, logs and sessions. This DB could be perfect for analytics (thanks to Map/Reduce), but that would require writing some GUI to show the results. The tool we currently use works only with RDBMS via ODBC.