mablog – the blog of ma (https://abendstille.at/blog)

ars electronica 2018 (https://abendstille.at/blog/?p=204)
Tue, 11 Sep 2018 17:21:19 +0000

error

i am not a regular ae (ars electronica) visitor, so this was my first visit in, i guess, 10 years or so? unfortunately i only had one day to spend, so i decided to visit post city. i’ve seen a lot and tried some things, so here you go: some pictures and subjective comments 🙂

science, art, error and imperfection

“error” was written on the ae team’s t-shirts; “imperfect vegetables” and a marquee led-wall reading “error humanity” welcomed the visitor.

space art

day 4 promised to be interesting for scientists. i visited the “Space Art – Trial and Error in Art&Science” conference.

Space Art Speakers

the speakers talked a lot about examples of space art – including the ArtSat project. imho they did not explain the aspect of trial and error, or i did not understand it.

u19 area

i really liked the u19 area where the “zusammen kommen lab” and the “future lab” were located. kids learned how to build and program robots (they had a robo-challenge which was very funny), sew, try virtual reality, control a fire truck via brain EMG and much more.

another interesting project was Verner, a robot that looked – to me – like the mars rover. it was awesome because it was built with cheap technology available to everybody. they even wrote software to verify the steering algorithm. impressive!

interesting exhibits

i was shocked by an artificial uterus, not because it could rescue babies born below 24 weeks, but because – in an improved version – it could possibly be used to “breed” humans or animals without any “real” father or mother.. welcome thx 1138!

the archae bot showed a concept robot that could survive singularity and climate change. to me it showed that what humans often describe as the “meaning of life” can be reduced to computation of information – the archae bot does nothing else and therefore does not need any limbs. it communicates wirelessly and does not need a mouth. it consumes electricity and could potentially charge itself, so it does not need food or a stomach.

some wearable projects were interesting as well. Deep Wear showed that machine learning algorithms can design clothes. Future Flora suggests that – as our body is symbiotically dependent on microbes – we could start wearing clothes that keep our body healthy, in this case prevent or treat vaginal infections.

other interesting exhibits

  • VFrame is a software that can help scan huge amounts of video and photo footage and helps identify unexploded grenades (relevant e.g. in syria)
  • if you only had the information a self-driving car has to operate with, would you trust yourself? Who wants to be a self-driving car explored exactly that: put on vr glasses and drive – you only see what a potential self-driving car could see.
  • the seer moved his eyes and head exactly as i moved, evoking emotions – i was very curious and it felt as if the shaman really showed emotions.
  • i was again shocked by the smile to vote project. it does not only use face recognition to sense a yes or no, it also detects what i want to vote! unfortunately i did not try it because the queue was too long… 🙁
  • the fluid solids project was innovative and gave hope: it showed that we could get rid of plastic, and that would be awesome!
  • i don’t know how many open (and closed) global data-collection projects are running in parallel, but Making Sense showed the concept and explained nicely how having evidence from the environment can change people’s discussions.
  • the neurospeculative afrofeminism exhibit showed a speculative world in which a person can get bionic upgrades to hide from cameras, enhance their abilities and much more, just by visiting a location very similar to a hairdresser’s.

good or bad?

i liked a lot of the exhibits, but the lectures and conferences i attended were much too artsy for me – not scientific enough imho.

i did not quite get why the 3d-printed bridge of amsterdam won versus fluid solids. getting rid of plastic has much more impact on the world imho than being able to print bridges – i love bridges, as they connect people, but wouldn’t it be better to credit projects that potentially help everybody? well, i guess i just don’t understand art 🙂

IT WAS GREAT visiting ae and i have to be there next year.

but more than one day. and maybe a little bit better prepared 🙂

death and birth of a blog (https://abendstille.at/blog/?p=199)
Mon, 10 Sep 2018 16:13:10 +0000

i love melancholic music. “birth and death of the day” by “explosions in the sky” reminds me of what has happened to this blog.

what happened in 2014

  • i was lucky to get together with cordula
  • i stopped coding in php
  • i co-founded craftworks
  • i desperately tried to finish my master thesis

looking back now, that was more than i was able to manage. i wanted to invest as much time as possible into my new relationship, i changed my main programming environment, i founded a new company (and wanted to continue being a traveler, musician and super-curious person).

you can guess where the story goes: i did not have enough time to do all this in a proper and satisfying way. believing the (stupid) assumption that i am only “successful” if i constantly add directly measurable value to the company, i neglected everything besides work. i felt overwhelmed by the difficulties and obstacles of being part of a founders’ team. it was very hard for me to find distinction. and with every “big thing” in my life, i only came to understand what was beautiful about it once i didn’t have it anymore.

all in all i had no time to read and write “irrelevant” stuff. it felt as if i had personally stabbed Tim Berners-Lee in the back and stopped doing what was the morally right thing to do: publishing my findings and experiences and discussing my opinion about what has to change. in fact i read A LOT of software engineering literature and literally wrote millions of lines of code. today i picture myself working behind a thick opaque curtain of wrong assumptions and client contracts. and it felt as if i did it for nothing (which of course was wrong, the clients were super happy).

and now?

at the end of 2017 i decided i wanted to leave the private software industry. at first i was only able to articulate the wish to leave “this” behind and start “something” new.

“childish” you may say (as i do now). in fact, that’s totally true. i did not have a clear idea of what should come next. i had the feeling that i had done something terribly wrong with my life and that i had to change it. so i wrote down what i am good at and what i am bad at (i have changed my decision techniques in the meantime, because i consider this method very, very basic) and decided to continue my studies in computer science.

now i work on disengagement in serious gaming.

i always advocated the position that one can only truly understand something by being aware of what does not “work” – being aware that any initial assumption is normally wrong, understanding that “too much of a hassle” does not mean “impossible” but starts a process. i also always wanted to see design and purpose considered BEFORE decisions are written in stone (e.g. in the form of an individual customer contract).

being an engineer at heart, this field of research is both challenging and interesting for me, as i know much about software (and its ecosystem) but only little about human behavioral effects and communication. my plan is to find a way to contribute to the field from the perspective of an engineer, building awareness for the bigger implications of the use of software and maybe even building new bridges between “hard core” programmers and designers.

le future

so if you want to give this blog a second chance, you might want to do it because i will try to document my findings on my way out of the “garage” of missing purpose and distinction i lived in the last years.

i look forward to a new way of working – the hci group at tu wien really is a beautiful place to work at. i will have to shift my metric/output-oriented (scrum points, kpi, kloc, jira issues, revenue, …) work style to a more outcome-oriented one (what am i supporting and why, what do i disapprove of and why, where should the journey go and why).

it’ll be beautiful.

Google Closure and Facebook JavaScript SDK (https://abendstille.at/blog/?p=173)
Tue, 03 Sep 2013 03:22:26 +0000

I just got the awesome error “fb.login called before fb.init” and remembered all the things i learned @knallgrau about why this error can happen – but found no solution. Here is why i still got it. And why it took me hours to find it.

what i tried

If you follow the instructions on https://developers.facebook.com/docs/reference/javascript/ and copy paste facebook’s initialization and loading code into one of your closure-classes you’ll get the typical error mentioned above.

Unfortunately fb.init fails silently, not telling you why it could not initialize. So i remembered the reasons why i got this error earlier and checked:

  • if i included all.js more than once
  • if the app id was wrong
  • if the app settings were incorrect (no trailing “/”, wrong domain-name, wrong sandbox settings, etc, etc)

nothing was wrong with any of that. i even wrote the app id literally into the init command. did not work.

what was wrong

I was desperate. In google closure, to preserve a global name from renaming (e.g. for externs) you have to store the function via array access:

window['fbAsyncInit'] = function() {
   FB.init({appId: 1234567890});
};

This was the only change i made when copy-pasting it into my closure class (ok, in the code above i removed some lines). Finally i figured i could check whether the correct function was written into the global fbAsyncInit. So i went to my console (had to clear millions of lines of hopeless debugging attempts) and typed “window.fbAsyncInit”. And what did i see???

Google Closure played with my code!

function $window$fbAsyncInit$() {
    FB.init({$appId$:1234567890, $channelUrl$:"/channel.html", status:!1, $xfbml$:!0, $oauth$:!0, cookie:!0, $logging$:!0});
  }

of course $appId$ resolved to ‘appId’, but this was the exact problem!

The solution

So the next step was easy: use string literals for my plain object!

window['fbAsyncInit'] = function() {
   FB.init({'appId': 1234567890});
};

worked like a charm.

List all entity classes managed by Doctrine in a Symfony2 controller (https://abendstille.at/blog/?p=163)
Sun, 25 Aug 2013 15:25:16 +0000

In a recent project (virtualidentityag/hydra-twitter) i tried to create a Symfony2 form field that lets you choose from the list of entities managed by doctrine (so that the user can decide which entity to map a REST result set to). I thought there was no easy way of doing this. Here is how i found a solution.

My first guess was to somehow reflect the schema, and i found that there is a SchemaManager. Unfortunately creating a schema using the schema manager did not help: i could not derive the class names of the entities from it. However i was able to get a list of the tables that are currently managed (i am working inside a symfony2 controller):

$em = $this->getDoctrine()->getManager();
$tables = $em->getConnection()->getSchemaManager()->listTables();

The tables are all instances of Doctrine\DBAL\Schema\Table. You cannot derive the entity class from the table directly because doctrine does not work that way. Doctrine uses a Hydrator to “deserialize” the result from the database to your own entities.

My second guess was to search Doctrine for functions that would return the fully qualified class name for a Table instance. Unfortunately this was not possible. In fact i have to admit that i did not fully understand how doctrine resolves the names that you use in DQL statements, because the dql-parsing code of Doctrine is quite complicated.

As i remembered from Symfony2 example code, you do not have to provide the FQCN in a DQL query – it suffices to write “YourBundle:YourEntity”. Doctrine will then automatically find your Entity class name by getting the namespace of your bundle, assuming you have a sub-namespace called “Entity”, and then constructing the class name from that coding convention.

But my code should not depend on bundle names or any bundle at all. So hard-coding this (and following the Symfony/Doctrine convention) was not an option in this situation.

On further investigation of Doctrine’s code i found a way to list all namespaces that are searched for entities, using the configuration object of Doctrine:

$em = $this->getDoctrine()->getManager();
$namespaces = $em->getConfiguration()->getEntityNamespaces();

From here on the next step was easy, although dirty – and i didn’t like what i’ve done here:

// warning: don't use that!
$entities = array();
$em = $this->getDoctrine()->getManager();
$tables = $em->getConnection()->getSchemaManager()->listTables();
$namespaces = $em->getConfiguration()->getEntityNamespaces();
foreach ($tables as $table) {
    foreach ($namespaces as $namespace) {
        if (class_exists($namespace.'\\'.$table->getName())) {
            // only record combinations that actually resolve to a class
            $entities[] = $namespace.'\\'.$table->getName();
            break;
        }
    }
}

I showed the code to a friend of mine (@traktorjosh) and he pointed out that my code would stop working the moment somebody decides to give his table a different name using the Table annotation: @Table(name="notWorking").

Back to start. I was frustrated. Then i decided to take a look at the UpdateCommand that is called when you execute “app/console doctrine:schema:update”. And there i found what i was desperately looking for: the getAllMetadata method. Who would have thought it was hiding there? The method returns a list of ClassMetadata objects which let you “Get fully-qualified class name of this persistent class”. So the final solution was straightforward:

$entities = array();
$em = $this->getDoctrine()->getManager();
$meta = $em->getMetadataFactory()->getAllMetadata();
foreach ($meta as $m) {
    $entities[] = $m->getName();
}

Finally it looks like i have found a clean method for listing all entities. Again hours of headache for 4 lines of code. If you have any suggestions on how to make this better, please let me know.

Backup a Drupal instance on a hetzner linux server (https://abendstille.at/blog/?p=150)
Wed, 10 Jul 2013 16:49:04 +0000

Ever wondered how to efficiently back up a drupal installation on a hetzner – or, more generally, on a debian linux server with a secondary backup server? Here is my solution.

Problem description

  • 2 servers (one running debian with the live system that should be backed up)
  • mysql must be backed up
  • file system must be backed up
  • second server (backup-server) has no shell-access, but SFTP

Solution

  • write script to create compressed file-backup (daily, weekly, monthly)
  • mount backup-server via samba/cifs
  • use automysqlbackup to create mysql-backup
  • use rsync to sync mysql-backup to the backup-server

How

File Backup

Since rsync does not work directly with my backup-server and google did not come up with usable results for my search requests (ok, maybe i was too stupid to google, or too eager to write this script myself), i wrote a script with the following requirements:

  • keep daily backups for a week
  • keep weekly backups for a month
  • keep monthly backups for a year
  • compress the backup
  • copy it via sftp

SFTP problems

As soon as i started to write the script mentioned above, i realized that this would not work out easily, because normally every secure connection requires a password that cannot always be stored. But in this case there is a solution. Hetzner themselves wrote a good how-to. The only thing they missed in their guide is that you must create the ssh key without a password (i hate things without passwords and thought the password would be stored in a secure place or something, which was a silly assumption looking back). For completeness i include the commands i executed:

# enter an EMPTY password when generating this. if there already
# exists a key-file, make sure you have no existing
# sftp-connections without password
ssh-keygen
ssh-keygen -e -f .ssh/id_rsa.pub | grep -v "Comment:" > .ssh/id_rsa_rfc.pub

# executing the following commands you will be asked for the
# remote password, but this should be the last time!
echo "mkdir .ssh" | sftp u15000@u15000.your-backup.de
echo "put .ssh/id_rsa_rfc.pub .ssh/authorized_keys" | sftp u15000@u15000.your-backup.de

Testing

Now you should be able to connect to your backup-server without password using sftp:

sftp u15000@u15000.your-backup.de

Troubleshooting

  • make sure you created the key without password
  • adjust the remote server address correctly
  • the .ssh folder might not exist on the remote server (you might have to create it first)
  • the file authorized_keys must follow RFC4716. This means there must not be a comment-line in it.

The file-backup-script

The following backup script requires a password-less sftp connection, so be sure you followed the steps above. I created the file-backup script in /usr/local/bin and added it to crontab.

nano /usr/local/bin/autofilebackup
chmod a+x /usr/local/bin/autofilebackup

Furthermore i created a folder named “filebackup” on my backup-server directly in the /-folder of a new sftp connection.

#!/bin/bash

# configuration
LOCAL_BACKUP_PATH='/var/www/drupal.live'
REMOTE_BACKUP_PATH='/filebackup' # remote path where backups will be stored. make sure the subfolders "daily" "weekly" and "monthly" exist!
REMOTE_SERVER=u15000@u15000.your-backup.de

# initialize variables
dc=`date +'%s'`
BACKUP_FILE="live-"
BACKUP_FILE=$BACKUP_FILE`date +"%Y-%m-%d"`
BACKUP_FILE="$BACKUP_FILE.tar.bz2"

# create daily backup
tar -cjf /tmp/live-`date +"%Y-%m-%d"`.tar.bz2 $LOCAL_BACKUP_PATH
echo "put /tmp/$BACKUP_FILE $REMOTE_BACKUP_PATH/daily/$BACKUP_FILE" | sftp $REMOTE_SERVER

# rotate daily backups (delete backups older than a week)
c=0
for i in `echo "ls $REMOTE_BACKUP_PATH/daily" | sftp $REMOTE_SERVER`
do
        c=`expr $c + 1`
        [ $c -le 3 ] && continue
        d=`echo $i | sed -r 's/[^0-9]*([0-9]+-[0-9]+-[0-9]+).*/\1/'`
        d=`date -d $d +'%s'`
        echo $i
        if [ `expr $dc - 691200` -ge $d ]
        then
                echo "delete $i" | sftp $REMOTE_SERVER
                echo 'deleted'
        fi
done

# create weekly backup if sunday
if [ `date +%u` -eq 7 ]
then
        echo "put /tmp/$BACKUP_FILE $REMOTE_BACKUP_PATH/weekly/$BACKUP_FILE" | sftp $REMOTE_SERVER
fi

# rotate weekly backups (delete backups older than a month)
c=0
for i in `echo "ls $REMOTE_BACKUP_PATH/weekly" | sftp $REMOTE_SERVER`
do
        c=`expr $c + 1`
        [ $c -le 3 ] && continue
        d=`echo $i | sed -r 's/[^0-9]*([0-9]+-[0-9]+-[0-9]+).*/\1/'`
        d=`date -d $d +'%s'`
        echo $i
        if [ `expr $dc - 2678400` -ge $d ]
        then
                echo "delete $i" | sftp $REMOTE_SERVER
                echo 'deleted'
        fi
done

# create monthly backup if 1st of month
if [ `date +%e` -eq 1 ]
then
        echo "put /tmp/$BACKUP_FILE $REMOTE_BACKUP_PATH/monthly/$BACKUP_FILE" | sftp $REMOTE_SERVER
fi


# rotate monthly backups (delete backups older than a year)
c=0
for i in `echo "ls $REMOTE_BACKUP_PATH/monthly" | sftp $REMOTE_SERVER`
do
        c=`expr $c + 1`
        [ $c -le 3 ] && continue
        d=`echo $i | sed -r 's/[^0-9]*([0-9]+-[0-9]+-[0-9]+).*/\1/'`
        d=`date -d $d +'%s'`
        echo $i
        if [ `expr $dc - 31536000` -ge $d ]
        then
                echo "delete $i" | sftp $REMOTE_SERVER
                echo 'deleted'
        fi
done

# clean up local backup
rm /tmp/$BACKUP_FILE
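
The rotation loops above hinge on one trick: pull the date out of the backup filename with sed, convert it to epoch seconds with date, and compare the difference against a threshold (691200 s = 8 days for the daily set). Here is that check as a standalone sketch, with a hypothetical filename (assumes GNU date and GNU sed):

```shell
# demo of the rotation age check used in the loops above
# (hypothetical filename; assumes GNU date/sed)
f="live-2013-07-01.tar.bz2"
d=$(echo "$f" | sed -r 's/[^0-9]*([0-9]+-[0-9]+-[0-9]+).*/\1/')  # extracts 2013-07-01
age=$(( $(date +'%s') - $(date -d "$d" +'%s') ))                 # age in seconds
if [ "$age" -ge 691200 ]; then
    echo "older than 8 days: the daily rotation would delete it"
fi
```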

Configuration

There are three variables that can be configured:

  1. LOCAL_BACKUP_PATH – this is where your drupal installation (or whatever you want to back up) lies. As this is the source argument to tar, you can add multiple paths here separated by spaces.
  2. REMOTE_BACKUP_PATH – this is where your compressed backup will be saved on the backup server, relative to the root folder of a new sftp connection. The script DOES NOT create the folder automatically!
  3. REMOTE_SERVER – these are the connection details for the sftp connection

Testing

Execute the script above; depending on how big your folder is, it should finish sooner or later. You should see the output of the put command and how fast the backup was transferred.

MySQL Backup

To export the database to the filesystem, from where i can copy the backups to the backup server, i use automysqlbackup. I did not find an official guide on how to install automysqlbackup correctly, but in fact it’s straightforward:

  • download it from sourceforge to your server
  • extract it
  • move the executable to /usr/local/bin
  • move the configuration file to /etc/automysqlbackup
  • insert backup-destination and username and password (and the databases you want to backup) into the configuration file

I personally like to have both a local and a remote backup, so i used /var/backups/db as the destination for automysqlbackup. Now you should be able to run the mysql backup:

/usr/local/bin/automysqlbackup
ls /var/backups/db

The trouble with rsync – mount backup-server via SAMBA/CIFS

rsync does not support SFTP as a protocol, so i had to use a trick i do not really like – but what choice did i have? I created a local mountpoint for the backup-server and used rsync locally.

Unfortunately, mounting a remote server manually means you would have to re-mount it every time your server reboots. This is not what we want, so we also register the mount in /etc/fstab.

Also, on my debian installation mount.cifs was not installed, so i had to install it first using apt-get.

apt-get install cifs-utils
nano /etc/fstab

The lines i added in this file are the following:

# /mnt/backup for backup-server and rsync of mysqlfiles
//u15000.your-backup.de/backup /mnt/backup cifs iocharset=utf8,rw,credentials=/etc/backup-credentials.txt,uid=0,gid=0,file_mode=0660,dir_mode=0770 0 0

This is also from the hetzner guide.

The file /etc/backup-credentials.txt (mode 0600) has the following content (oh, we all love passwords stored in plaintext, yeah):

username=USERNAME
password=PASSWORD
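
Because the password sits there in plaintext, the 0600 mode really matters. A small sketch of creating the file with safe permissions from the start (written to /tmp here purely for illustration; the real file is /etc/backup-credentials.txt):

```shell
# demo: create the credentials file with owner-only permissions
# (/tmp path is for illustration only; the real file is /etc/backup-credentials.txt)
CRED=/tmp/backup-credentials.txt
umask 077            # files created from here on get mode 0600
cat > "$CRED" <<'EOF'
username=USERNAME
password=PASSWORD
EOF
```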

Putting it all together

Now we are ready to install our crontab scripts:

EDITOR=nano crontab -e

I added the following lines:

0 4 * * * /usr/local/bin/automysqlbackup
30 4 * * * rsync -rtv --delete /var/backups/db/ /mnt/backup/mysqlbackup/
0 5 * * * /usr/local/bin/autofilebackup

You see that i give every process 30 minutes to execute. This might be paranoid and you might decide differently. Another problem i want to point out here is consistency: you won’t get a coherent db backup and filesystem using this method. To achieve that you would have to set your website to maintenance mode using drush, execute the backup scripts as fast as possible and then end maintenance mode.
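
A rough sketch of such a “consistent” run – this is a hypothetical wrapper, assuming drupal 7’s drush vset maintenance_mode syntax and the two backup scripts from this post; adjust the paths to your setup:

```shell
# hypothetical wrapper: toggle maintenance mode around the backups
# (drupal 7 / drush "vset maintenance_mode" syntax assumed; paths from this post)
DRUPAL_ROOT=/var/www/drupal.live
consistent_backup() {
    drush -r "$DRUPAL_ROOT" vset maintenance_mode 1
    /usr/local/bin/automysqlbackup
    /usr/local/bin/autofilebackup
    drush -r "$DRUPAL_ROOT" vset maintenance_mode 0
}
# you would then call consistent_backup from one cron entry
# instead of three separate ones
```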

Of course, for bigger websites this is no solution. You would create a redundant system (using mysql replication) and back up the filesystem using virtual machines and snapshots. VMware describes and offers such solutions, but there are also others.

A good guide about rsync is this article.

Please let me know if you have any troubles or suggestions!

SSH2 Extension for MAMP on MacOS X (https://abendstille.at/blog/?p=144)
Thu, 13 Jun 2013 14:19:58 +0000

If you want to use SSH2 with PHP (say you built a custom deployment system) you need the SSH2 extension for php. Here is how you get it to work using MAMP on OSX.

I recently wrote an article on how to get the autotools installed on osx. Do this first – you need autoconf, automake and libtool.

Then installing the SSH2 extension is easy as pie:

  1. Install the autotools
  2. Install libssh2 (as root)
  3. Build the ssh2 extension using pecl (you have to give pecl the full channel url, because ssh2 is still beta)

Here is what this looked like on my cli:

cd /usr/local/src
sudo bash
curl -OL http://www.libssh2.org/download/libssh2-1.4.3.tar.gz
tar xzf libssh2-1.4.3.tar.gz
cd libssh2-1.4.3
./configure --prefix=/usr/local/
make
make install
exit
cd /Applications/MAMP/bin/php/php5.3.6/
./bin/pecl install channel://pecl.php.net/ssh2-0.12

Now the only thing remaining is enabling the module in your php.ini.
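
For that last step, an idempotent append avoids duplicate entries if you run it twice. The snippet below writes to a temp file for demonstration; point INI at your real MAMP php.ini instead (on my setup that would be somewhere under /Applications/MAMP/bin/php/php5.3.6 – check yours):

```shell
# append extension=ssh2.so only if it is not already present
# (INI is a temp file for this demo; use your real MAMP php.ini)
INI=/tmp/php.ini.demo
touch "$INI"
grep -qx 'extension=ssh2.so' "$INI" || echo 'extension=ssh2.so' >> "$INI"
```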

Intl Extension for MAMP on MacOS X (https://abendstille.at/blog/?p=131)
Tue, 11 Jun 2013 15:48:24 +0000

Many Symfony bundles require the intl php extension. Unfortunately my MAMP version lacks it. Here is how i got it to work.

First, a list of things missing in osx/mamp when adding the intl extension:

  • autoconf/automake/libtool
  • icu (International Components for Unicode)
  • php-source

I am not sure wth came over apple, but in newer versions of XCode they do not include the autotools anymore.

My strategy for building the intl extension was to use pecl. I opted for that because it is said to be configuration-less. Can’t believe it? Me neither. But i think it’s still a somewhat convenient way to install php extensions (when you do not find a binary). Also, i always install everything into /usr/local because i think this is the place where things belong. Don’t forget to change the paths to your preferences.

Ok. Back to work.

Build Tools and ICU

install autoconf/automake/libtool/icu (many thanks to Jean-Sebastien) as root (this is system stuff, it’s ok to do that as root):

sudo bash
cd /usr/local/src
curl -OL http://ftpmirror.gnu.org/autoconf/autoconf-2.68.tar.gz
tar xzf autoconf-2.68.tar.gz
cd autoconf-2.68
./configure --prefix=/usr/local
make
make install

cd /usr/local/src
curl -OL http://ftpmirror.gnu.org/automake/automake-1.11.tar.gz
tar xzf automake-1.11.tar.gz
cd automake-1.11
./configure --prefix=/usr/local
make
make install

cd /usr/local/src
curl -OL http://ftpmirror.gnu.org/libtool/libtool-2.4.tar.gz
tar xzf libtool-2.4.tar.gz
cd libtool-2.4
./configure --prefix=/usr/local
make
make install

cd /usr/local/src
curl -OL http://download.icu-project.org/files/icu4c/4.8.1.1/icu4c-4_8_1_1-src.tgz
tar xzf icu4c-4_8_1_1-src.tgz
cd icu/source
./configure --prefix=/usr/local
make
make install

Nice. Now you should have all the compile basics that apple (purposely?) neglected, plus icu.

PHP source

mamp does not include the php-source. But – and i definitely like this – MAMP provides the source for customization. Very neat. They call it “MAMP components”. Grab the version you need from sourceforge. I needed 2.0.2 because my MAMP version was 2.0.5.

Follow these steps (adjusting your paths and versions):

  1. download the MAMP-components from sourceforge
  2. unpack them – you get a bunch of zips
  3. locate the PHP-version you are using (i was using 5.3.6 as of writing this post)
  4. locate your MAMP-PHP-folder. Mine was located in /Applications/MAMP/bin/php/php5.3.6
  5. create an include/php folder inside this folder so that the following path is valid: /Applications/MAMP/bin/php/php5.3.6/include/php
  6. unpack the contents of the previously downloaded (mamp-components) php-5.3.6.tar.gz to this very folder.
  7. open a terminal and call the configure routine in this folder: ./configure --prefix=/Applications/MAMP/bin/php/php5.3.6
  8. happy pecl-ing!

ok – i did all this on the command line, so here is what my commands looked like (maybe this helps more than the list above – this time no sudo, dudes, this is no system stuff!):

cd ~/Downloads
unzip MAMP_components_2.0.2.zip
cd /Applications/MAMP/bin/php/php5.3.6
mkdir include
tar xzf ~/Downloads/MAMP_components_2.0.2/php-5.3.6.tar.gz -C include
mv include/php-5.3.6 include/php
cd include/php
./configure --prefix=/Applications/MAMP/bin/php/php5.3.6

We now have extended the basic MAMP-version with the php source code.

Finally install Intl-PECL extension

I did this after switching to the corresponding php-version folder.

cd /Applications/MAMP/bin/php/php5.3.6
./bin/pecl install intl

The installation routine asks you for the location of your ICU installation. I answered /usr/local as location. Adjust this to your setup if you used different paths.

Enable the intl-extension in your php.ini by adding extension=intl.so to it. And don’t forget to restart MAMP.
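
To check the result without guessing, you can ask a php binary directly whether intl is loaded. A tiny helper sketch (the MAMP path in the usage comment is the one from this post – adjust it to your version):

```shell
# helper: does the given php binary have the intl extension loaded?
# usage: has_intl /Applications/MAMP/bin/php/php5.3.6/bin/php && echo "intl is loaded"
has_intl() {
    "$1" -m 2>/dev/null | grep -qx 'intl'
}
```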

That should have been it. HTH? Tell me in comments or contact me via twitter @gruzilla if you’re having troubles following this post.

Cleaning up

We do not need the files generated in /usr/local/src anymore – but watch out that you only delete what you created just now. The MAMP components in your Downloads folder can also be deleted.

Troubleshooting

Some of my friends encountered problems following the above guide. Here are some hints.

1) fatal error: ‘php.h’ file not found

Make sure you installed the MAMP-components correctly. Make sure the file php.h (and all files along with it) are in the correct place. The correct path would be:

/Applications/MAMP/bin/php/php5.3.6/include/php/main/php.h

Of course this might differ if you use another php-version.

2) Unable to detect ICU prefix or … failed. Please verify ICU install prefix and make sure icu-config works.

Now here we have two possibilities: a) you installed ICU in the wrong directory (wrong --prefix when building icu) or b) you gave pecl the wrong answer about where your ICU libraries are located.

  • The correct prefix is /usr/local
  • The correct answer is either /usr/local or /usr/local/ – whatever works for you

If you have built the ICU libraries in the wrong location (this happens when you give it some other prefix), you might want to uninstall them first using make uninstall.

3) My PHP binary won’t find intl functions although everything else worked!

Ok – there can be multiple reasons to that.

  • you might be using the wrong php-binary: execute which php in your terminal to check which binary is used. You can change this by altering your $PATH environment variable. Do not forget to restart your terminal.
  • you might have changed the wrong php.ini file. execute php -i | grep php.ini to check which php.ini is loaded by your binary and change the correct file.
  • the intl.so file might not have been created in the correct location (or building might have failed) – check that the file exists. In my environment the file is located here: /Applications/MAMP/bin/php/php5.3.6/lib/php/extensions/no-debug-non-zts-20090626/intl.so

If you encounter any other problems, please let me know in the comments or contact me directly.

Google+ (https://abendstille.at/blog/?p=121)
Fri, 08 Jul 2011 00:17:28 +0000

everybody does it right now, so here are my rather ironic and maybe over-exaggerated 10 cents on +

just let me say some introductory words: i like +, i’d prefer it to fb if everybody was on it (not only geeks and fanboys who think they have to join just because it’s new). you made the point: many features + has are heavily missed in fb! i read somewhere (not literally) “you use your fb, but you make your +”. + is, for the time being, not a social but a medial experience to me. i can customize a lot, and that is just purely awesome. my suggestion: make this your primary rule of concept, because fb doesn’t let me decide crucial things (don’t know why. money?) and i hate that, since i feel mature. i tried to think about what irritates me most about +’s behaviour…

1) want to mute circles.
-> created a “roflwhoareyou”-circle for people i don’t care about or simply do not know (maybe your suggestion thing did not work properly), yet their “totally interesting” posts still appear in my stream. this sucks, because i simply don’t want to see them. now don’t come at me with “then don’t add them”, because then i’d say: ok – just explain the sense of circles to me again, please.

2) reset circles when writing posts, or let me define the default!
ex:
write message to circle “men”
-> yey! vaginal!

write message to circle “women”
-> oh damn fu, just sent it to “men” because + remembered my last msg went to men 🙁
now everybody thinks i’m gay. shit.

3) want to see combined circles stream
-> i can only see one stream or all streams at once. sucks.

4) want to do a full text search on my (filtered) stream.
-> cannot! oh, sorry i forgot: you don’t have any xp in searching…

5) want to add friends to chat without awaiting their confirmation
-> i’ve already added them to a circle i trust, and they have done so too, so why do we have to confirm this again? you don’t give a shit whether i add somebody to a circle, and you tell everybody about it anyway. be consistent: if they don’t want to talk to me, they won’t. if they don’t want me to be able to add them without their confirmation, then let them decide that.

6) generally, + defaults are crap. + is the gentoo of social media networks, so let ME (=ma) decide.
-> if i want that somebody can only add me to chat with my confirmation let MA decide (not you, crappy google!)

7) search twitter hashtags in sparks!
-> sparking zendfw doesn’t find anything about Zend Framework although it’s a common twitter hashtag. don’t you search twitter because it’s not yours? your whole business is built upon others’ content, so include that.

8) show me my online friends in hangouts!
-> hangouts are awesome, but this would just add the finishing touch. besides youtube vids are sometimes not played synchronously.

9) don’t want to see double posts if someone shared a friends post.
-> make something like “shared by XY”; it’s really annoying when 4 of my friends share the same message from one of my friends. not only annoying, this is SPAM! (you know that spam is bad for user xp, don’t you? google mail *wink*)

10) cannot find people by [any f** social network]-name
-> come on google? you’re the #1 search engine on teh interwebz! don’t tell me you do not know my twitter friends.

]]>
https://abendstille.at/blog/?feed=rss2&p=121 2 121
Instagram blocks IP? https://abendstille.at/blog/?p=117 https://abendstille.at/blog/?p=117#comments Sun, 09 Jan 2011 14:30:13 +0000 http://www.abendstille.at/blog/?p=117 I recently created a PHP Instagram API called phpinstagram and published it on github. I also set up this API on my own webserver and got quite a high response as far as stats are concerned. Yesterday my service stopped working, and it seems as if Instagram is blocking my webserver’s IP address.

My blog post about instagram was nothing new, as @mislav had already documented the unofficial instagram api in great detail.

Today @rennie72 tweeted me that my instagram api is not working anymore (thanks for that!). I then started trying to fix the issue, but unfortunately i could not reproduce the error locally. I put my code up on another server to check whether the error was a mistake of my own. But as the service works both on my local machine and on another webserver, I expect instagram to be blocking my original webserver’s ip-address.
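A quick way to compare the two machines is to request the same endpoint from each host and compare the HTTP status codes (a sketch – the endpoint here is just an illustrative one from my earlier api notes):

```shell
# run this once on the blocked webserver and once locally, then compare the codes
curl -s -o /dev/null -w '%{http_code}\n' "http://instagr.am/api/v1/users/1/info/"
```

If the working machine gets a 200 and the webserver gets a 403 (or the connection simply times out), an IP block is the likely explanation.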

Instagram says in its faq that they are “interested in helping developers build on top of our platform”. But if they block ip-addresses, I cannot see how they are helping devs build on their platform.

I looked at my AWStats and found 4.4k requests to my api alone! I never got that many requests before! 🙂

Then i found out that my api had been mentioned several times on the net (which I also didn’t know).

Somebody even wrote a request on instagram’s zendesk, which i cannot access.

All this motivates me to further improve the php api! I will try to free up some time for this in the next months! Let’s hope Instagram removes my webserver’s IP from their block list 🙁

]]>
https://abendstille.at/blog/?feed=rss2&p=117 5 117
Instagram API for PHP https://abendstille.at/blog/?p=107 https://abendstille.at/blog/?p=107#comments Tue, 07 Dec 2010 00:16:28 +0000 http://www.abendstille.at/blog/?p=107 Recently i wondered why instagr.am did not publish its api, especially because i assumed they were already using one internally. Therefore i created a wlan on my macbook, connected the iphone to it and sniffed some instagram traffic using wireshark. What i found is the following API. I then started to develop a PHP library for it at github.

[UPDATE] yesterday @mislav contacted me and linked me his api-documentation which is much nicer than mine 🙂 here you go: https://github.com/mislav/instagram/wiki

The base for all urls is http://instagr.am/api/v1/ – “v1” suggests that they are currently developing a v2 which will probably be released at some point. Normally instagr.am uses gzip as the encoding for its communication.
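As a tiny sketch of how the urls below fit together (the api_url helper name is my own, not part of the api):

```shell
# build a full request url from the instagr.am v1 base
base="http://instagr.am/api/v1/"
api_url() { printf '%s%s\n' "$base" "$1"; }

api_url "users/42/info/"
# → http://instagr.am/api/v1/users/42/info/
```

Every action in the list below is such a path appended to the base.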

The following list is not complete. You can guess other actions from the url names; i only tested a few, and only some of them made it into my PHP library.

“pk” stands for primary key i suppose.

login
  • URL: accounts/login/
  • Parameters: username, password, device_id
  • Response: {"status":"ok"} on success, or {"status":"failed", "message":"some error message"}

user details
  • URL: users/[pk]/info/
  • Response: {"status": "ok", "user": {"username": "xxx", "media_count": 0, "following_count": 0, "profile_pic_url": "http://xxx.jpg", "full_name": "xxx", "follower_count": 0, "pk": pk}}

post comment
  • URL: media/[pk]/comment
  • Parameters: comment_text
  • Response: {"comment": {"media_id": pk, "text": "xxx", "created_at": 0, "user": {"username": "xxx", "pk": pk, "profile_pic_url": "http://xxx.jpg", "full_name": "xxx"}, "content_type": "comment", "type": 1}}

change media data
  • URL: media/configure
  • Parameters: device_timestamp=0&caption=xxx&location={"name":"xxx","lng":0,"lat":0,"external_id":0,"address":"xxx","external_source":"foursquare"}&source_type=0&filter_type=15
  • Response: {"status": "ok", "media": {"image_versions": [{"url": "xxx.jpg", "width": 150, "type": 5, "height": 150}, {"url": "xxx.jpg", "width": 306, "type": 6, "height": 306}, {"url": "http://xxx.jpg", "width": 612, "type": 7, "height": 612}], "code": "xxx", "likers": [], "taken_at": 0, "location": {"external_source": "foursquare", "name": "xxx", "address": "xxx", "lat": 0, "pk": pk, "lng": 0, "external_id": 0}, "filter_type": "15", "device_timestamp": 0, "user": {"username": "xxx", "pk": pk, "profile_pic_url": "http://xx.jpg", "full_name": "xxx"}, "media_type": 1, "lat": 0, "pk": 0, "lng": 0, "comments": [{"media_id": 0, "text": "xxx", "created_at": 0, "user": {"username": "xxx", "pk": pk, "profile_pic_url": "http://xxx.jpg", "full_name": "xxx"}, "content_type": "comment", "type": 1}]}}

like media
  • URL: media/[pk]/like/
  • Response: {"status": "ok"}

upload media
  • URL: media/upload (multipart/form-data)
  • Parameters: device_timestamp=0, lat=0, lng=0, photo=(binary data, filename="file")
  • Response: {"status": "ok"}

show friend details
  • URL: friendships/show/[pk]
  • Response: {"following": true, "status": "ok", "incoming_request": false, "followed_by": true, "outgoing_request": false}

show own feed
  • URL: feed/timeline/? (the trailing ? may be a number that defines how many elements you want to load, but i haven’t tested that so far)
  • Response: very massive (and would be a repetition), therefore only an overview:
    {"status":"ok", "items":[ — a list of media items, similar to the response to changing media data — ],"num_results":0}

show feed of a user
  • URL: feed/user/? (same as above)
  • Response: (same as above)

As i already said, there are definitely more actions possible, but i didn’t try every one of them. I’m happy with reading from instagr.am, which is what i did at ma instagram.

I hope i can bring this api into a nicely readable form in the near future.

Hope that helps someone.

]]>
https://abendstille.at/blog/?feed=rss2&p=107 4 107