Wikis are social web sites enabling a potentially large number of participants to modify any page or create a new page using their web browser. As they grow, wikis suffer from a number of problems (anarchical structure, large number of pages, aging navigation paths, etc.). We believe that semantic wikis can improve both navigation and search. In SweetWiki we investigate the design of a wiki built around a semantic web server, i.e., the use of semantic web technologies to support and ease the lifecycle of the wiki. The very model of wikis was declaratively described using semantic web frameworks: an OWL schema captures concepts such as WikiWord, wiki page, forward and backward link, author, date of modification, version, etc. This ontology is then exploited by an existing semantic search engine (Corese) embedded in our server. In addition, SweetWiki integrates a standard WYSIWYG editor (Kupu) that we extended to directly support semantic annotation following the "social tagging" approach made popular by web sites such as flickr.com or del.icio.us and by the technorati.com search engine. When editing a page, the user can freely enter keywords in an AJAX-powered text field. An auto-completion mechanism proposes existing keywords by issuing SPARQL queries that identify existing concepts with compatible labels, and it shows the number of other pages sharing these concepts. With this approach, tagging is both easy (keyword-like) and motivating (the number of pages sharing each tag is displayed in real time). Thus concepts are collected and used as in folksonomies. To maintain and re-engineer the folksonomy, SweetWiki reuses the web-based editors available in the underlying semantic web server to edit semantic web ontologies and annotations. Another distinctive feature of SweetWiki is its persistence mechanism: unlike other wikis, its pages are stored directly in XHTML and are thus ready to be served to browsers.
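The auto-completion step described above can be illustrated by a query of the following shape. This is only a sketch: the prefix, vocabulary, and property names (`sw:hasTag`, the `^sem` prefix) are hypothetical, not SweetWiki's actual schema, and the aggregate syntax follows SPARQL 1.1 for readability.

```sparql
# Illustrative sketch (hypothetical vocabulary): find tags whose label
# is compatible with the user's input so far, and count the pages
# already annotated with each tag.
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX sw:   <http://example.org/sweetwiki#>

SELECT ?tag ?label (COUNT(?page) AS ?nbPages)
WHERE {
  ?tag  rdfs:label ?label .
  ?page sw:hasTag  ?tag .
  FILTER regex(?label, "^sem", "i")   # user has typed "sem" so far
}
GROUP BY ?tag ?label
```

Returning the page count alongside each candidate label is what makes the display motivating: the user immediately sees how many pages would share the tag.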
Semantic annotations are embedded in the wiki pages themselves using the RDF/A syntax, under specification at the W3C. Therefore, if someone sends a wiki page to someone else, the annotations travel with it, and if an application crawls the wiki site it can extract the metadata and reuse it.
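An embedded annotation of this kind might look like the following XHTML fragment. This is a sketch under assumptions: the `sw:` vocabulary, property names, and page URIs are illustrative placeholders, not SweetWiki's actual schema.

```html
<!-- Illustrative sketch (hypothetical vocabulary): a wiki page whose
     metadata is carried inline as RDF/A attributes, so it survives
     copying the page and can be harvested by a crawler. -->
<div xmlns:sw="http://example.org/sweetwiki#" about="/wiki/SemanticWeb">
  <span property="sw:author">Alice</span>
  wrote this page, tagged
  <span property="sw:keyword">semantic web</span>,
  which links to <a rel="sw:forwardLink" href="/wiki/RDF">RDF</a>.
</div>
```

Because the triples live in the served XHTML rather than in a separate store, the page is self-describing: any RDF/A-aware tool can recover author, tags, and links without access to the wiki server.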