Service robots need various kinds of information about objects in order to grasp and manipulate them. Besides physical information such as geometry and weight, semantic information about the objects is also required. To model both kinds of information, we have constructed a multimodal object modeling center. It models the physical properties of an object, such as texture and 3D geometry, using a digitizer and movable stereo cameras. Further object properties relevant for grasping are computed automatically. In addition, a human teacher can communicate with the system through multimodal interaction to convey semantic information about grasping. The information acquired in this modeling center covers everything a grasp planner needs. We have implemented a grasp planning system based on the grasp simulator "GraspIt!" to plan high-quality grasps. The semantic information is represented as shape primitives, which the grasp planner treats as obstacles or as must-touch regions of the object in order to influence the resulting grasps. The modeled physical and semantic information, the automatically computed properties, and the computed grasps are stored in a database, which provides a service robot with the knowledge needed to grasp household objects.